Test Report: KVM_Linux_crio 21682

7a7892355cfa060afe2cc9d2507b1d1308b66169:2025-10-02:41740

Failed tests: 4 of 324

Order  Failed test                                            Duration (s)
37     TestAddons/parallel/Ingress                              158.76
121    TestFunctional/parallel/ImageCommands/ImageListShort       2.25
244    TestPreload                                              159.63
291    TestPause/serial/SecondStartNoReconfiguration             75.24
TestAddons/parallel/Ingress (158.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-760875 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-760875 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-760875 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1a448908-fb28-4d4e-9861-f29c1b50e494] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1a448908-fb28-4d4e-9861-f29c1b50e494] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004369211s
I1002 20:23:09.692757  497569 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-760875 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.122237573s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-760875 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.220
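Note: the "ssh: Process exited with status 28" above is the remote command's exit code surfaced through minikube ssh; exit 28 matches curl's CURLE_OPERATION_TIMEDOUT, i.e. the request was issued but nginx never answered in time. A manual reproduction sketch, not part of the test run (the deployment name assumes the ingress addon's default):

	out/minikube-linux-amd64 -p addons-760875 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-760875 -n ingress-nginx get pods -o wide
	kubectl --context addons-760875 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50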
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-760875 -n addons-760875
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 logs -n 25: (1.260197678s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-533787                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-533787 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ --download-only -p binary-mirror-182362 --alsologtostderr --binary-mirror http://127.0.0.1:37441 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-182362 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ -p binary-mirror-182362                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-182362 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ addons  │ disable dashboard -p addons-760875                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ addons  │ enable dashboard -p addons-760875                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p addons-760875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ enable headlamp -p addons-760875 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ ip      │ addons-760875 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ addons-760875 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ ssh     │ addons-760875 ssh cat /opt/local-path-provisioner/pvc-90967178-cfba-4823-8096-89c566fceab3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ addons  │ addons-760875 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-760875                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ addons  │ addons-760875 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ addons  │ addons-760875 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ ssh     │ addons-760875 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │                     │
	│ addons  │ addons-760875 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ addons  │ addons-760875 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:23 UTC │ 02 Oct 25 20:23 UTC │
	│ ip      │ addons-760875 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-760875        │ jenkins │ v1.37.0 │ 02 Oct 25 20:25 UTC │ 02 Oct 25 20:25 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:54.258853  498295 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:54.259098  498295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:54.259107  498295 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:54.259111  498295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:54.259319  498295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 20:18:54.259855  498295 out.go:368] Setting JSON to false
	I1002 20:18:54.260666  498295 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3669,"bootTime":1759432665,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:18:54.260768  498295 start.go:140] virtualization: kvm guest
	I1002 20:18:54.262212  498295 out.go:179] * [addons-760875] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:18:54.263409  498295 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:18:54.263406  498295 notify.go:220] Checking for updates...
	I1002 20:18:54.265152  498295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:54.266048  498295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 20:18:54.266966  498295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 20:18:54.267839  498295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:18:54.268683  498295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:54.269601  498295 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:18:54.298956  498295 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 20:18:54.299809  498295 start.go:304] selected driver: kvm2
	I1002 20:18:54.299834  498295 start.go:924] validating driver "kvm2" against <nil>
	I1002 20:18:54.299851  498295 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:54.300870  498295 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:54.300956  498295 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:18:54.314096  498295 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:18:54.314120  498295 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:18:54.328525  498295 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:18:54.328565  498295 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:54.328862  498295 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:54.328894  498295 cni.go:84] Creating CNI manager for ""
	I1002 20:18:54.328939  498295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:18:54.328947  498295 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:54.328992  498295 start.go:348] cluster config:
	{Name:addons-760875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-760875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:54.329078  498295 iso.go:125] acquiring lock: {Name:mk7586bb79dc7f44da54ee16895643204aac50ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:54.330840  498295 out.go:179] * Starting "addons-760875" primary control-plane node in "addons-760875" cluster
	I1002 20:18:54.331601  498295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:54.331642  498295 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:54.331660  498295 cache.go:58] Caching tarball of preloaded images
	I1002 20:18:54.331777  498295 preload.go:233] Found /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:18:54.331790  498295 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:54.332082  498295 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/config.json ...
	I1002 20:18:54.332103  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/config.json: {Name:mkd04be364c5a71d8269b4f91ba722c93d6f0aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:54.332257  498295 start.go:360] acquireMachinesLock for addons-760875: {Name:mk9e7957cdce1fd4b26ce430105927ec465bcae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:18:54.332304  498295 start.go:364] duration metric: took 34.628µs to acquireMachinesLock for "addons-760875"
	I1002 20:18:54.332321  498295 start.go:93] Provisioning new machine with config: &{Name:addons-760875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-760875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:18:54.332383  498295 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 20:18:54.333496  498295 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 20:18:54.333650  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:18:54.333693  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:18:54.346218  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I1002 20:18:54.346718  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:18:54.347235  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:18:54.347285  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:18:54.347670  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:18:54.347881  498295 main.go:141] libmachine: (addons-760875) Calling .GetMachineName
	I1002 20:18:54.348052  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:18:54.348213  498295 start.go:159] libmachine.API.Create for "addons-760875" (driver="kvm2")
	I1002 20:18:54.348243  498295 client.go:168] LocalClient.Create starting
	I1002 20:18:54.348277  498295 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem
	I1002 20:18:54.453247  498295 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem
	I1002 20:18:54.671457  498295 main.go:141] libmachine: Running pre-create checks...
	I1002 20:18:54.671478  498295 main.go:141] libmachine: (addons-760875) Calling .PreCreateCheck
	I1002 20:18:54.671963  498295 main.go:141] libmachine: (addons-760875) Calling .GetConfigRaw
	I1002 20:18:54.672420  498295 main.go:141] libmachine: Creating machine...
	I1002 20:18:54.672436  498295 main.go:141] libmachine: (addons-760875) Calling .Create
	I1002 20:18:54.672600  498295 main.go:141] libmachine: (addons-760875) creating domain...
	I1002 20:18:54.672625  498295 main.go:141] libmachine: (addons-760875) creating network...
	I1002 20:18:54.674137  498295 main.go:141] libmachine: (addons-760875) DBG | found existing default network
	I1002 20:18:54.674284  498295 main.go:141] libmachine: (addons-760875) DBG | <network>
	I1002 20:18:54.674308  498295 main.go:141] libmachine: (addons-760875) DBG |   <name>default</name>
	I1002 20:18:54.674324  498295 main.go:141] libmachine: (addons-760875) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 20:18:54.674353  498295 main.go:141] libmachine: (addons-760875) DBG |   <forward mode='nat'>
	I1002 20:18:54.674363  498295 main.go:141] libmachine: (addons-760875) DBG |     <nat>
	I1002 20:18:54.674374  498295 main.go:141] libmachine: (addons-760875) DBG |       <port start='1024' end='65535'/>
	I1002 20:18:54.674386  498295 main.go:141] libmachine: (addons-760875) DBG |     </nat>
	I1002 20:18:54.674396  498295 main.go:141] libmachine: (addons-760875) DBG |   </forward>
	I1002 20:18:54.674409  498295 main.go:141] libmachine: (addons-760875) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 20:18:54.674420  498295 main.go:141] libmachine: (addons-760875) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 20:18:54.674430  498295 main.go:141] libmachine: (addons-760875) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 20:18:54.674442  498295 main.go:141] libmachine: (addons-760875) DBG |     <dhcp>
	I1002 20:18:54.674454  498295 main.go:141] libmachine: (addons-760875) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 20:18:54.674469  498295 main.go:141] libmachine: (addons-760875) DBG |     </dhcp>
	I1002 20:18:54.674477  498295 main.go:141] libmachine: (addons-760875) DBG |   </ip>
	I1002 20:18:54.674481  498295 main.go:141] libmachine: (addons-760875) DBG | </network>
	I1002 20:18:54.674490  498295 main.go:141] libmachine: (addons-760875) DBG | 
	I1002 20:18:54.675681  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:54.675510  498323 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123550}
	I1002 20:18:54.675768  498295 main.go:141] libmachine: (addons-760875) DBG | defining private network:
	I1002 20:18:54.675790  498295 main.go:141] libmachine: (addons-760875) DBG | 
	I1002 20:18:54.675799  498295 main.go:141] libmachine: (addons-760875) DBG | <network>
	I1002 20:18:54.675806  498295 main.go:141] libmachine: (addons-760875) DBG |   <name>mk-addons-760875</name>
	I1002 20:18:54.675816  498295 main.go:141] libmachine: (addons-760875) DBG |   <dns enable='no'/>
	I1002 20:18:54.675828  498295 main.go:141] libmachine: (addons-760875) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 20:18:54.675841  498295 main.go:141] libmachine: (addons-760875) DBG |     <dhcp>
	I1002 20:18:54.675850  498295 main.go:141] libmachine: (addons-760875) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 20:18:54.675862  498295 main.go:141] libmachine: (addons-760875) DBG |     </dhcp>
	I1002 20:18:54.675872  498295 main.go:141] libmachine: (addons-760875) DBG |   </ip>
	I1002 20:18:54.675881  498295 main.go:141] libmachine: (addons-760875) DBG | </network>
	I1002 20:18:54.675894  498295 main.go:141] libmachine: (addons-760875) DBG | 
	I1002 20:18:54.681175  498295 main.go:141] libmachine: (addons-760875) DBG | creating private network mk-addons-760875 192.168.39.0/24...
	I1002 20:18:54.743622  498295 main.go:141] libmachine: (addons-760875) DBG | private network mk-addons-760875 192.168.39.0/24 created
	I1002 20:18:54.743864  498295 main.go:141] libmachine: (addons-760875) DBG | <network>
	I1002 20:18:54.743887  498295 main.go:141] libmachine: (addons-760875) DBG |   <name>mk-addons-760875</name>
	I1002 20:18:54.743900  498295 main.go:141] libmachine: (addons-760875) setting up store path in /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875 ...
	I1002 20:18:54.743931  498295 main.go:141] libmachine: (addons-760875) building disk image from file:///home/jenkins/minikube-integration/21682-492630/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 20:18:54.743948  498295 main.go:141] libmachine: (addons-760875) DBG |   <uuid>19f86392-b0f9-4837-a36e-24edaef318f5</uuid>
	I1002 20:18:54.743961  498295 main.go:141] libmachine: (addons-760875) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 20:18:54.743973  498295 main.go:141] libmachine: (addons-760875) DBG |   <mac address='52:54:00:58:8c:8d'/>
	I1002 20:18:54.743990  498295 main.go:141] libmachine: (addons-760875) Downloading /home/jenkins/minikube-integration/21682-492630/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21682-492630/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 20:18:54.744003  498295 main.go:141] libmachine: (addons-760875) DBG |   <dns enable='no'/>
	I1002 20:18:54.744017  498295 main.go:141] libmachine: (addons-760875) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 20:18:54.744027  498295 main.go:141] libmachine: (addons-760875) DBG |     <dhcp>
	I1002 20:18:54.744047  498295 main.go:141] libmachine: (addons-760875) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 20:18:54.744060  498295 main.go:141] libmachine: (addons-760875) DBG |     </dhcp>
	I1002 20:18:54.744090  498295 main.go:141] libmachine: (addons-760875) DBG |   </ip>
	I1002 20:18:54.744114  498295 main.go:141] libmachine: (addons-760875) DBG | </network>
	I1002 20:18:54.744131  498295 main.go:141] libmachine: (addons-760875) DBG | 
	I1002 20:18:54.744147  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:54.743855  498323 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 20:18:55.064964  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:55.064856  498323 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa...
	I1002 20:18:55.375219  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:55.375086  498323 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/addons-760875.rawdisk...
	I1002 20:18:55.375239  498295 main.go:141] libmachine: (addons-760875) DBG | Writing magic tar header
	I1002 20:18:55.375250  498295 main.go:141] libmachine: (addons-760875) DBG | Writing SSH key tar header
	I1002 20:18:55.375258  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:55.375215  498323 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875 ...
	I1002 20:18:55.375346  498295 main.go:141] libmachine: (addons-760875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875
	I1002 20:18:55.375361  498295 main.go:141] libmachine: (addons-760875) setting executable bit set on /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875 (perms=drwx------)
	I1002 20:18:55.375368  498295 main.go:141] libmachine: (addons-760875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630/.minikube/machines
	I1002 20:18:55.375378  498295 main.go:141] libmachine: (addons-760875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 20:18:55.375388  498295 main.go:141] libmachine: (addons-760875) setting executable bit set on /home/jenkins/minikube-integration/21682-492630/.minikube/machines (perms=drwxr-xr-x)
	I1002 20:18:55.375395  498295 main.go:141] libmachine: (addons-760875) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630
	I1002 20:18:55.375405  498295 main.go:141] libmachine: (addons-760875) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 20:18:55.375412  498295 main.go:141] libmachine: (addons-760875) setting executable bit set on /home/jenkins/minikube-integration/21682-492630/.minikube (perms=drwxr-xr-x)
	I1002 20:18:55.375416  498295 main.go:141] libmachine: (addons-760875) DBG | checking permissions on dir: /home/jenkins
	I1002 20:18:55.375433  498295 main.go:141] libmachine: (addons-760875) DBG | checking permissions on dir: /home
	I1002 20:18:55.375440  498295 main.go:141] libmachine: (addons-760875) DBG | skipping /home - not owner
	I1002 20:18:55.375448  498295 main.go:141] libmachine: (addons-760875) setting executable bit set on /home/jenkins/minikube-integration/21682-492630 (perms=drwxrwxr-x)
	I1002 20:18:55.375454  498295 main.go:141] libmachine: (addons-760875) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 20:18:55.375460  498295 main.go:141] libmachine: (addons-760875) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 20:18:55.375468  498295 main.go:141] libmachine: (addons-760875) defining domain...
	I1002 20:18:55.376583  498295 main.go:141] libmachine: (addons-760875) defining domain using XML: 
	I1002 20:18:55.376602  498295 main.go:141] libmachine: (addons-760875) <domain type='kvm'>
	I1002 20:18:55.376607  498295 main.go:141] libmachine: (addons-760875)   <name>addons-760875</name>
	I1002 20:18:55.376612  498295 main.go:141] libmachine: (addons-760875)   <memory unit='MiB'>4096</memory>
	I1002 20:18:55.376617  498295 main.go:141] libmachine: (addons-760875)   <vcpu>2</vcpu>
	I1002 20:18:55.376620  498295 main.go:141] libmachine: (addons-760875)   <features>
	I1002 20:18:55.376625  498295 main.go:141] libmachine: (addons-760875)     <acpi/>
	I1002 20:18:55.376634  498295 main.go:141] libmachine: (addons-760875)     <apic/>
	I1002 20:18:55.376690  498295 main.go:141] libmachine: (addons-760875)     <pae/>
	I1002 20:18:55.376733  498295 main.go:141] libmachine: (addons-760875)   </features>
	I1002 20:18:55.376748  498295 main.go:141] libmachine: (addons-760875)   <cpu mode='host-passthrough'>
	I1002 20:18:55.376756  498295 main.go:141] libmachine: (addons-760875)   </cpu>
	I1002 20:18:55.376766  498295 main.go:141] libmachine: (addons-760875)   <os>
	I1002 20:18:55.376775  498295 main.go:141] libmachine: (addons-760875)     <type>hvm</type>
	I1002 20:18:55.376782  498295 main.go:141] libmachine: (addons-760875)     <boot dev='cdrom'/>
	I1002 20:18:55.376788  498295 main.go:141] libmachine: (addons-760875)     <boot dev='hd'/>
	I1002 20:18:55.376798  498295 main.go:141] libmachine: (addons-760875)     <bootmenu enable='no'/>
	I1002 20:18:55.376811  498295 main.go:141] libmachine: (addons-760875)   </os>
	I1002 20:18:55.376822  498295 main.go:141] libmachine: (addons-760875)   <devices>
	I1002 20:18:55.376830  498295 main.go:141] libmachine: (addons-760875)     <disk type='file' device='cdrom'>
	I1002 20:18:55.376846  498295 main.go:141] libmachine: (addons-760875)       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/boot2docker.iso'/>
	I1002 20:18:55.376855  498295 main.go:141] libmachine: (addons-760875)       <target dev='hdc' bus='scsi'/>
	I1002 20:18:55.376861  498295 main.go:141] libmachine: (addons-760875)       <readonly/>
	I1002 20:18:55.376867  498295 main.go:141] libmachine: (addons-760875)     </disk>
	I1002 20:18:55.376873  498295 main.go:141] libmachine: (addons-760875)     <disk type='file' device='disk'>
	I1002 20:18:55.376882  498295 main.go:141] libmachine: (addons-760875)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 20:18:55.376892  498295 main.go:141] libmachine: (addons-760875)       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/addons-760875.rawdisk'/>
	I1002 20:18:55.376899  498295 main.go:141] libmachine: (addons-760875)       <target dev='hda' bus='virtio'/>
	I1002 20:18:55.376904  498295 main.go:141] libmachine: (addons-760875)     </disk>
	I1002 20:18:55.376910  498295 main.go:141] libmachine: (addons-760875)     <interface type='network'>
	I1002 20:18:55.376916  498295 main.go:141] libmachine: (addons-760875)       <source network='mk-addons-760875'/>
	I1002 20:18:55.376922  498295 main.go:141] libmachine: (addons-760875)       <model type='virtio'/>
	I1002 20:18:55.376927  498295 main.go:141] libmachine: (addons-760875)     </interface>
	I1002 20:18:55.376933  498295 main.go:141] libmachine: (addons-760875)     <interface type='network'>
	I1002 20:18:55.376938  498295 main.go:141] libmachine: (addons-760875)       <source network='default'/>
	I1002 20:18:55.376944  498295 main.go:141] libmachine: (addons-760875)       <model type='virtio'/>
	I1002 20:18:55.376985  498295 main.go:141] libmachine: (addons-760875)     </interface>
	I1002 20:18:55.377009  498295 main.go:141] libmachine: (addons-760875)     <serial type='pty'>
	I1002 20:18:55.377018  498295 main.go:141] libmachine: (addons-760875)       <target port='0'/>
	I1002 20:18:55.377031  498295 main.go:141] libmachine: (addons-760875)     </serial>
	I1002 20:18:55.377043  498295 main.go:141] libmachine: (addons-760875)     <console type='pty'>
	I1002 20:18:55.377055  498295 main.go:141] libmachine: (addons-760875)       <target type='serial' port='0'/>
	I1002 20:18:55.377066  498295 main.go:141] libmachine: (addons-760875)     </console>
	I1002 20:18:55.377075  498295 main.go:141] libmachine: (addons-760875)     <rng model='virtio'>
	I1002 20:18:55.377088  498295 main.go:141] libmachine: (addons-760875)       <backend model='random'>/dev/random</backend>
	I1002 20:18:55.377105  498295 main.go:141] libmachine: (addons-760875)     </rng>
	I1002 20:18:55.377113  498295 main.go:141] libmachine: (addons-760875)   </devices>
	I1002 20:18:55.377122  498295 main.go:141] libmachine: (addons-760875) </domain>
	I1002 20:18:55.377136  498295 main.go:141] libmachine: (addons-760875) 
	I1002 20:18:55.383164  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:20:7d:d6 in network default
	I1002 20:18:55.383726  498295 main.go:141] libmachine: (addons-760875) starting domain...
	I1002 20:18:55.383747  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:55.383752  498295 main.go:141] libmachine: (addons-760875) ensuring networks are active...
	I1002 20:18:55.384352  498295 main.go:141] libmachine: (addons-760875) Ensuring network default is active
	I1002 20:18:55.384780  498295 main.go:141] libmachine: (addons-760875) Ensuring network mk-addons-760875 is active
	I1002 20:18:55.385396  498295 main.go:141] libmachine: (addons-760875) getting domain XML...
	I1002 20:18:55.386366  498295 main.go:141] libmachine: (addons-760875) DBG | starting domain XML:
	I1002 20:18:55.386383  498295 main.go:141] libmachine: (addons-760875) DBG | <domain type='kvm'>
	I1002 20:18:55.386392  498295 main.go:141] libmachine: (addons-760875) DBG |   <name>addons-760875</name>
	I1002 20:18:55.386399  498295 main.go:141] libmachine: (addons-760875) DBG |   <uuid>1cfe69eb-e8cf-4278-a015-d4098f2f3935</uuid>
	I1002 20:18:55.386407  498295 main.go:141] libmachine: (addons-760875) DBG |   <memory unit='KiB'>4194304</memory>
	I1002 20:18:55.386419  498295 main.go:141] libmachine: (addons-760875) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1002 20:18:55.386427  498295 main.go:141] libmachine: (addons-760875) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 20:18:55.386434  498295 main.go:141] libmachine: (addons-760875) DBG |   <os>
	I1002 20:18:55.386444  498295 main.go:141] libmachine: (addons-760875) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 20:18:55.386451  498295 main.go:141] libmachine: (addons-760875) DBG |     <boot dev='cdrom'/>
	I1002 20:18:55.386476  498295 main.go:141] libmachine: (addons-760875) DBG |     <boot dev='hd'/>
	I1002 20:18:55.386490  498295 main.go:141] libmachine: (addons-760875) DBG |     <bootmenu enable='no'/>
	I1002 20:18:55.386495  498295 main.go:141] libmachine: (addons-760875) DBG |   </os>
	I1002 20:18:55.386500  498295 main.go:141] libmachine: (addons-760875) DBG |   <features>
	I1002 20:18:55.386505  498295 main.go:141] libmachine: (addons-760875) DBG |     <acpi/>
	I1002 20:18:55.386521  498295 main.go:141] libmachine: (addons-760875) DBG |     <apic/>
	I1002 20:18:55.386526  498295 main.go:141] libmachine: (addons-760875) DBG |     <pae/>
	I1002 20:18:55.386530  498295 main.go:141] libmachine: (addons-760875) DBG |   </features>
	I1002 20:18:55.386546  498295 main.go:141] libmachine: (addons-760875) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 20:18:55.386559  498295 main.go:141] libmachine: (addons-760875) DBG |   <clock offset='utc'/>
	I1002 20:18:55.386570  498295 main.go:141] libmachine: (addons-760875) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 20:18:55.386580  498295 main.go:141] libmachine: (addons-760875) DBG |   <on_reboot>restart</on_reboot>
	I1002 20:18:55.386593  498295 main.go:141] libmachine: (addons-760875) DBG |   <on_crash>destroy</on_crash>
	I1002 20:18:55.386599  498295 main.go:141] libmachine: (addons-760875) DBG |   <devices>
	I1002 20:18:55.386612  498295 main.go:141] libmachine: (addons-760875) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 20:18:55.386622  498295 main.go:141] libmachine: (addons-760875) DBG |     <disk type='file' device='cdrom'>
	I1002 20:18:55.386646  498295 main.go:141] libmachine: (addons-760875) DBG |       <driver name='qemu' type='raw'/>
	I1002 20:18:55.386664  498295 main.go:141] libmachine: (addons-760875) DBG |       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/boot2docker.iso'/>
	I1002 20:18:55.386672  498295 main.go:141] libmachine: (addons-760875) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 20:18:55.386677  498295 main.go:141] libmachine: (addons-760875) DBG |       <readonly/>
	I1002 20:18:55.386683  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 20:18:55.386689  498295 main.go:141] libmachine: (addons-760875) DBG |     </disk>
	I1002 20:18:55.386695  498295 main.go:141] libmachine: (addons-760875) DBG |     <disk type='file' device='disk'>
	I1002 20:18:55.386702  498295 main.go:141] libmachine: (addons-760875) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 20:18:55.386723  498295 main.go:141] libmachine: (addons-760875) DBG |       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/addons-760875.rawdisk'/>
	I1002 20:18:55.386735  498295 main.go:141] libmachine: (addons-760875) DBG |       <target dev='hda' bus='virtio'/>
	I1002 20:18:55.386746  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 20:18:55.386756  498295 main.go:141] libmachine: (addons-760875) DBG |     </disk>
	I1002 20:18:55.386763  498295 main.go:141] libmachine: (addons-760875) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 20:18:55.386771  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 20:18:55.386776  498295 main.go:141] libmachine: (addons-760875) DBG |     </controller>
	I1002 20:18:55.386782  498295 main.go:141] libmachine: (addons-760875) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 20:18:55.386787  498295 main.go:141] libmachine: (addons-760875) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 20:18:55.386795  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 20:18:55.386800  498295 main.go:141] libmachine: (addons-760875) DBG |     </controller>
	I1002 20:18:55.386807  498295 main.go:141] libmachine: (addons-760875) DBG |     <interface type='network'>
	I1002 20:18:55.386822  498295 main.go:141] libmachine: (addons-760875) DBG |       <mac address='52:54:00:75:19:bc'/>
	I1002 20:18:55.386827  498295 main.go:141] libmachine: (addons-760875) DBG |       <source network='mk-addons-760875'/>
	I1002 20:18:55.386832  498295 main.go:141] libmachine: (addons-760875) DBG |       <model type='virtio'/>
	I1002 20:18:55.386837  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 20:18:55.386841  498295 main.go:141] libmachine: (addons-760875) DBG |     </interface>
	I1002 20:18:55.386849  498295 main.go:141] libmachine: (addons-760875) DBG |     <interface type='network'>
	I1002 20:18:55.386862  498295 main.go:141] libmachine: (addons-760875) DBG |       <mac address='52:54:00:20:7d:d6'/>
	I1002 20:18:55.386869  498295 main.go:141] libmachine: (addons-760875) DBG |       <source network='default'/>
	I1002 20:18:55.386875  498295 main.go:141] libmachine: (addons-760875) DBG |       <model type='virtio'/>
	I1002 20:18:55.386882  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 20:18:55.386887  498295 main.go:141] libmachine: (addons-760875) DBG |     </interface>
	I1002 20:18:55.386905  498295 main.go:141] libmachine: (addons-760875) DBG |     <serial type='pty'>
	I1002 20:18:55.386931  498295 main.go:141] libmachine: (addons-760875) DBG |       <target type='isa-serial' port='0'>
	I1002 20:18:55.386947  498295 main.go:141] libmachine: (addons-760875) DBG |         <model name='isa-serial'/>
	I1002 20:18:55.386959  498295 main.go:141] libmachine: (addons-760875) DBG |       </target>
	I1002 20:18:55.386969  498295 main.go:141] libmachine: (addons-760875) DBG |     </serial>
	I1002 20:18:55.386978  498295 main.go:141] libmachine: (addons-760875) DBG |     <console type='pty'>
	I1002 20:18:55.386988  498295 main.go:141] libmachine: (addons-760875) DBG |       <target type='serial' port='0'/>
	I1002 20:18:55.386996  498295 main.go:141] libmachine: (addons-760875) DBG |     </console>
	I1002 20:18:55.387006  498295 main.go:141] libmachine: (addons-760875) DBG |     <input type='mouse' bus='ps2'/>
	I1002 20:18:55.387015  498295 main.go:141] libmachine: (addons-760875) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 20:18:55.387029  498295 main.go:141] libmachine: (addons-760875) DBG |     <audio id='1' type='none'/>
	I1002 20:18:55.387042  498295 main.go:141] libmachine: (addons-760875) DBG |     <memballoon model='virtio'>
	I1002 20:18:55.387054  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 20:18:55.387065  498295 main.go:141] libmachine: (addons-760875) DBG |     </memballoon>
	I1002 20:18:55.387072  498295 main.go:141] libmachine: (addons-760875) DBG |     <rng model='virtio'>
	I1002 20:18:55.387085  498295 main.go:141] libmachine: (addons-760875) DBG |       <backend model='random'>/dev/random</backend>
	I1002 20:18:55.387107  498295 main.go:141] libmachine: (addons-760875) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 20:18:55.387122  498295 main.go:141] libmachine: (addons-760875) DBG |     </rng>
	I1002 20:18:55.387133  498295 main.go:141] libmachine: (addons-760875) DBG |   </devices>
	I1002 20:18:55.387142  498295 main.go:141] libmachine: (addons-760875) DBG | </domain>
	I1002 20:18:55.387152  498295 main.go:141] libmachine: (addons-760875) DBG | 
	I1002 20:18:56.650603  498295 main.go:141] libmachine: (addons-760875) waiting for domain to start...
	I1002 20:18:56.652055  498295 main.go:141] libmachine: (addons-760875) domain is now running
	I1002 20:18:56.652077  498295 main.go:141] libmachine: (addons-760875) waiting for IP...
	I1002 20:18:56.653038  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:56.653559  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:18:56.653586  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:18:56.653867  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:18:56.653951  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:56.653896  498323 retry.go:31] will retry after 293.359811ms: waiting for domain to come up
	I1002 20:18:56.948701  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:56.949184  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:18:56.949214  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:18:56.949455  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:18:56.949500  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:56.949446  498323 retry.go:31] will retry after 357.476177ms: waiting for domain to come up
	I1002 20:18:57.309005  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:57.309436  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:18:57.309467  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:18:57.309803  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:18:57.309846  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:57.309773  498323 retry.go:31] will retry after 367.068585ms: waiting for domain to come up
	I1002 20:18:57.678220  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:57.678666  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:18:57.678692  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:18:57.679008  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:18:57.679039  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:57.678945  498323 retry.go:31] will retry after 517.581947ms: waiting for domain to come up
	I1002 20:18:58.198745  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:58.199256  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:18:58.199284  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:18:58.199508  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:18:58.199585  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:58.199517  498323 retry.go:31] will retry after 498.129519ms: waiting for domain to come up
	I1002 20:18:58.699446  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:58.700016  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:18:58.700038  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:18:58.700296  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:18:58.700335  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:58.700281  498323 retry.go:31] will retry after 579.602128ms: waiting for domain to come up
	I1002 20:18:59.281932  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:18:59.282467  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:18:59.282497  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:18:59.282767  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:18:59.282795  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:18:59.282736  498323 retry.go:31] will retry after 1.156932626s: waiting for domain to come up
	I1002 20:19:00.441620  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:00.442110  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:19:00.442132  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:19:00.442409  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:19:00.442474  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:19:00.442410  498323 retry.go:31] will retry after 912.477301ms: waiting for domain to come up
	I1002 20:19:01.356569  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:01.356982  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:19:01.357010  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:19:01.357264  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:19:01.357307  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:19:01.357236  498323 retry.go:31] will retry after 1.708021715s: waiting for domain to come up
	I1002 20:19:03.066567  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:03.067016  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:19:03.067046  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:19:03.067329  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:19:03.067360  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:19:03.067314  498323 retry.go:31] will retry after 2.149983004s: waiting for domain to come up
	I1002 20:19:05.219529  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:05.219990  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:19:05.220015  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:19:05.220353  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:19:05.220385  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:19:05.220329  498323 retry.go:31] will retry after 2.585860614s: waiting for domain to come up
	I1002 20:19:07.809323  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:07.809837  498295 main.go:141] libmachine: (addons-760875) DBG | no network interface addresses found for domain addons-760875 (source=lease)
	I1002 20:19:07.809860  498295 main.go:141] libmachine: (addons-760875) DBG | trying to list again with source=arp
	I1002 20:19:07.810208  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find current IP address of domain addons-760875 in network mk-addons-760875 (interfaces detected: [])
	I1002 20:19:07.810244  498295 main.go:141] libmachine: (addons-760875) DBG | I1002 20:19:07.810202  498323 retry.go:31] will retry after 2.831592438s: waiting for domain to come up
	I1002 20:19:10.644001  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:10.644752  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has current primary IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:10.644779  498295 main.go:141] libmachine: (addons-760875) found domain IP: 192.168.39.220
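
The block of retries above is a plain poll-with-backoff loop: list the domain's DHCP leases, fall back to an ARP listing, and sleep a randomized, growing interval until an address appears. A minimal Go sketch of that shape (function names and constants here are illustrative, not minikube's actual retry.go API):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it yields an address or the deadline passes,
    // sleeping a randomized, growing interval between attempts -- the same shape
    // as the "will retry after Xms: waiting for domain to come up" loop above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            // Jitter the sleep so concurrent waiters don't poll in lockstep.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("attempt %d: will retry after %v\n", attempt, sleep)
            time.Sleep(sleep)
            if backoff < 3*time.Second {
                backoff += backoff / 2 // grow ~1.5x per attempt, loosely capped
            }
        }
        return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
        // Stand-in lookup; the real code queries the libvirt DHCP leases
        // (source=lease) and falls back to an ARP listing, as logged above.
        ip, err := waitForIP(func() (string, error) { return "", nil }, 2*time.Second)
        fmt.Println(ip, err)
    }
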
	I1002 20:19:10.644837  498295 main.go:141] libmachine: (addons-760875) reserving static IP address...
	I1002 20:19:10.645147  498295 main.go:141] libmachine: (addons-760875) DBG | unable to find host DHCP lease matching {name: "addons-760875", mac: "52:54:00:75:19:bc", ip: "192.168.39.220"} in network mk-addons-760875
	I1002 20:19:10.825208  498295 main.go:141] libmachine: (addons-760875) reserved static IP address 192.168.39.220 for domain addons-760875
	I1002 20:19:10.825238  498295 main.go:141] libmachine: (addons-760875) DBG | Getting to WaitForSSH function...
	I1002 20:19:10.825246  498295 main.go:141] libmachine: (addons-760875) waiting for SSH...
	I1002 20:19:10.827778  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:10.828279  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:10.828305  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:10.828409  498295 main.go:141] libmachine: (addons-760875) DBG | Using SSH client type: external
	I1002 20:19:10.828468  498295 main.go:141] libmachine: (addons-760875) DBG | Using SSH private key: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa (-rw-------)
	I1002 20:19:10.828502  498295 main.go:141] libmachine: (addons-760875) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 20:19:10.828525  498295 main.go:141] libmachine: (addons-760875) DBG | About to run SSH command:
	I1002 20:19:10.828540  498295 main.go:141] libmachine: (addons-760875) DBG | exit 0
	I1002 20:19:10.965654  498295 main.go:141] libmachine: (addons-760875) DBG | SSH cmd err, output: <nil>: 
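
The WaitForSSH step above amounts to running "exit 0" through the external ssh client, with the options logged a few lines earlier, until the command exits cleanly; success proves both that sshd is up and that the injected key is accepted. A self-contained sketch of that probe, reusing the user, host, and key path from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeSSH runs "exit 0" through the system ssh client until it succeeds.
    // The options mirror the ones logged above; user, host, and key path are
    // parameters so the sketch stays generic.
    func probeSSH(user, host, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-o", "IdentitiesOnly=yes",
                "-i", keyPath,
                fmt.Sprintf("%s@%s", user, host),
                "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // sshd is up and the injected key was accepted
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not ready within %v", host, timeout)
    }

    func main() {
        err := probeSSH("docker", "192.168.39.220",
            "/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa",
            2*time.Minute)
        fmt.Println(err)
    }
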
	I1002 20:19:10.965938  498295 main.go:141] libmachine: (addons-760875) domain creation complete
	I1002 20:19:10.966309  498295 main.go:141] libmachine: (addons-760875) Calling .GetConfigRaw
	I1002 20:19:10.966923  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:10.967150  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:10.967317  498295 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 20:19:10.967335  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:10.968598  498295 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 20:19:10.968616  498295 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 20:19:10.968621  498295 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 20:19:10.968626  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:10.970892  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:10.971226  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:10.971257  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:10.971470  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:10.971649  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:10.971840  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:10.971975  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:10.972118  498295 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:10.972365  498295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1002 20:19:10.972376  498295 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 20:19:11.081259  498295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:19:11.081287  498295 main.go:141] libmachine: Detecting the provisioner...
	I1002 20:19:11.081298  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:11.084472  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.084870  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.084901  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.085105  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:11.085330  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.085517  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.085637  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:11.085805  498295 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.086043  498295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1002 20:19:11.086056  498295 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 20:19:11.200092  498295 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 20:19:11.200228  498295 main.go:141] libmachine: found compatible host: buildroot
	I1002 20:19:11.200253  498295 main.go:141] libmachine: Provisioning with buildroot...
	I1002 20:19:11.200267  498295 main.go:141] libmachine: (addons-760875) Calling .GetMachineName
	I1002 20:19:11.200568  498295 buildroot.go:166] provisioning hostname "addons-760875"
	I1002 20:19:11.200603  498295 main.go:141] libmachine: (addons-760875) Calling .GetMachineName
	I1002 20:19:11.200818  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:11.203677  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.204054  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.204091  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.204299  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:11.204500  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.204685  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.204841  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:11.205032  498295 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.205243  498295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1002 20:19:11.205254  498295 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-760875 && echo "addons-760875" | sudo tee /etc/hostname
	I1002 20:19:11.336365  498295 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-760875
	
	I1002 20:19:11.336398  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:11.341161  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.341599  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.341634  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.341876  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:11.342070  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.342228  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.342349  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:11.342506  498295 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.342746  498295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1002 20:19:11.342764  498295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-760875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-760875/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-760875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:19:11.461875  498295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:19:11.461905  498295 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21682-492630/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-492630/.minikube}
	I1002 20:19:11.461968  498295 buildroot.go:174] setting up certificates
	I1002 20:19:11.461985  498295 provision.go:84] configureAuth start
	I1002 20:19:11.462000  498295 main.go:141] libmachine: (addons-760875) Calling .GetMachineName
	I1002 20:19:11.462295  498295 main.go:141] libmachine: (addons-760875) Calling .GetIP
	I1002 20:19:11.465349  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.465752  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.465781  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.465954  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:11.468258  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.468674  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.468722  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.468847  498295 provision.go:143] copyHostCerts
	I1002 20:19:11.468936  498295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem (1078 bytes)
	I1002 20:19:11.469084  498295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem (1123 bytes)
	I1002 20:19:11.469174  498295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem (1675 bytes)
	I1002 20:19:11.469250  498295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem org=jenkins.addons-760875 san=[127.0.0.1 192.168.39.220 addons-760875 localhost minikube]
	I1002 20:19:11.557268  498295 provision.go:177] copyRemoteCerts
	I1002 20:19:11.557340  498295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:19:11.557394  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:11.560260  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.560643  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.560672  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.560854  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:11.561066  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.561207  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:11.561427  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:11.649061  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 20:19:11.676258  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:19:11.704257  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:19:11.729834  498295 provision.go:87] duration metric: took 267.828277ms to configureAuth
	I1002 20:19:11.729870  498295 buildroot.go:189] setting minikube options for container-runtime
	I1002 20:19:11.730068  498295 config.go:182] Loaded profile config "addons-760875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:11.730180  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:11.733305  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.733767  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.733811  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.734033  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:11.734239  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.734388  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.734513  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:11.734655  498295 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:11.734884  498295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1002 20:19:11.734899  498295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:19:11.967156  498295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:19:11.967187  498295 main.go:141] libmachine: Checking connection to Docker...
	I1002 20:19:11.967197  498295 main.go:141] libmachine: (addons-760875) Calling .GetURL
	I1002 20:19:11.968432  498295 main.go:141] libmachine: (addons-760875) DBG | using libvirt version 8000000
	I1002 20:19:11.970532  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.970904  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.970936  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.971061  498295 main.go:141] libmachine: Docker is up and running!
	I1002 20:19:11.971077  498295 main.go:141] libmachine: Reticulating splines...
	I1002 20:19:11.971093  498295 client.go:171] duration metric: took 17.622832967s to LocalClient.Create
	I1002 20:19:11.971133  498295 start.go:167] duration metric: took 17.622916705s to libmachine.API.Create "addons-760875"
	I1002 20:19:11.971144  498295 start.go:293] postStartSetup for "addons-760875" (driver="kvm2")
	I1002 20:19:11.971155  498295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:19:11.971174  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:11.971431  498295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:19:11.971459  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:11.973614  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.974011  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:11.974031  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:11.974259  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:11.974436  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:11.974579  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:11.974769  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:12.060950  498295 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:19:12.065439  498295 info.go:137] Remote host: Buildroot 2025.02
	I1002 20:19:12.065466  498295 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/addons for local assets ...
	I1002 20:19:12.065564  498295 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/files for local assets ...
	I1002 20:19:12.065602  498295 start.go:296] duration metric: took 94.44967ms for postStartSetup
	I1002 20:19:12.065648  498295 main.go:141] libmachine: (addons-760875) Calling .GetConfigRaw
	I1002 20:19:12.066262  498295 main.go:141] libmachine: (addons-760875) Calling .GetIP
	I1002 20:19:12.069110  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.069509  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:12.069547  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.069826  498295 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/config.json ...
	I1002 20:19:12.070063  498295 start.go:128] duration metric: took 17.737668491s to createHost
	I1002 20:19:12.070087  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:12.072460  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.072821  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:12.072849  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.072995  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:12.073191  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:12.073361  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:12.073491  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:12.073624  498295 main.go:141] libmachine: Using SSH client type: native
	I1002 20:19:12.073862  498295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1002 20:19:12.073875  498295 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 20:19:12.185878  498295 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759436352.148004462
	
	I1002 20:19:12.185902  498295 fix.go:216] guest clock: 1759436352.148004462
	I1002 20:19:12.185911  498295 fix.go:229] Guest: 2025-10-02 20:19:12.148004462 +0000 UTC Remote: 2025-10-02 20:19:12.07007754 +0000 UTC m=+17.852271617 (delta=77.926922ms)
	I1002 20:19:12.185942  498295 fix.go:200] guest clock delta is within tolerance: 77.926922ms
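
The clock check above parses the guest's "date +%s.%N" output, subtracts it from the host clock, and accepts the machine if the delta is small. A sketch of the parse-and-compare step; the 2s tolerance below is an assumption for illustration, not minikube's constant:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
        secs, err := strconv.ParseInt(sec, 10, 64)
        if err != nil {
            return time.Time{}, fmt.Errorf("parsing guest clock %q: %w", out, err)
        }
        // Right-pad the fractional part to nanoseconds ("148" -> 148000000).
        nsec, err := strconv.ParseInt((frac + "000000000")[:9], 10, 64)
        if err != nil {
            return time.Time{}, fmt.Errorf("parsing guest clock %q: %w", out, err)
        }
        return time.Unix(secs, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1759436352.148004462\n")
        if err != nil {
            fmt.Println(err)
            return
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // Tolerance is assumed here; minikube applies its own threshold
        // before deciding whether to resync the guest clock.
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
    }
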
	I1002 20:19:12.185950  498295 start.go:83] releasing machines lock for "addons-760875", held for 17.853636707s
	I1002 20:19:12.185977  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:12.186253  498295 main.go:141] libmachine: (addons-760875) Calling .GetIP
	I1002 20:19:12.189270  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.189696  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:12.189746  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.189921  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:12.190429  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:12.190611  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:12.190746  498295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:19:12.190800  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:12.190863  498295 ssh_runner.go:195] Run: cat /version.json
	I1002 20:19:12.190890  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:12.194051  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.194302  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.194402  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:12.194454  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.194632  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:12.194810  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:12.194833  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:12.194857  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:12.195036  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:12.195048  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:12.195267  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:12.195287  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:12.195415  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:12.195531  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:12.297427  498295 ssh_runner.go:195] Run: systemctl --version
	I1002 20:19:12.303316  498295 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:19:12.457891  498295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:19:12.465017  498295 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:19:12.465080  498295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:19:12.483497  498295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:19:12.483520  498295 start.go:495] detecting cgroup driver to use...
	I1002 20:19:12.483579  498295 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:19:12.501795  498295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:19:12.518802  498295 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:19:12.518858  498295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:19:12.534656  498295 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:19:12.549751  498295 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:19:12.692336  498295 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:19:12.896805  498295 docker.go:234] disabling docker service ...
	I1002 20:19:12.896870  498295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:19:12.912153  498295 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:19:12.925635  498295 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:19:13.073935  498295 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:19:13.211307  498295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:19:13.228700  498295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:19:13.249732  498295 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:19:13.249807  498295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.260996  498295 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:19:13.261088  498295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.273007  498295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.285059  498295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.296682  498295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:19:13.308701  498295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.319653  498295 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.342034  498295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:19:13.353662  498295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:19:13.363190  498295 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 20:19:13.363231  498295 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 20:19:13.381016  498295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:19:13.391461  498295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:13.527004  498295 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:19:13.900619  498295 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:19:13.900750  498295 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:19:13.906408  498295 start.go:563] Will wait 60s for crictl version
	I1002 20:19:13.906479  498295 ssh_runner.go:195] Run: which crictl
	I1002 20:19:13.910157  498295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 20:19:13.944175  498295 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 20:19:13.944265  498295 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.970965  498295 ssh_runner.go:195] Run: crio --version
	I1002 20:19:13.999129  498295 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 20:19:14.000016  498295 main.go:141] libmachine: (addons-760875) Calling .GetIP
	I1002 20:19:14.002769  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:14.003170  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:14.003200  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:14.003440  498295 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 20:19:14.007593  498295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:19:14.021420  498295 kubeadm.go:883] updating cluster {Name:addons-760875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-760875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:19:14.021543  498295 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:19:14.021616  498295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:14.055347  498295 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
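
The preload decision above rests on a single probe: dump the runtime's images as JSON and look for the expected kube-apiserver tag. A sketch of that check, assuming crictl's usual JSON field names (repoTags):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList models only the fields this check needs from
    // `crictl images --output json` (field name assumed).
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the runtime already has the given image tag --
    // the same probe the preload check above runs before copying the tarball.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.34.1")
        fmt.Println(ok, err)
    }
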
	I1002 20:19:14.055404  498295 ssh_runner.go:195] Run: which lz4
	I1002 20:19:14.059130  498295 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 20:19:14.063505  498295 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 20:19:14.063527  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1002 20:19:15.380768  498295 crio.go:462] duration metric: took 1.321665931s to copy over tarball
	I1002 20:19:15.380837  498295 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 20:19:16.938575  498295 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.55771184s)
	I1002 20:19:16.938607  498295 crio.go:469] duration metric: took 1.557811454s to extract the tarball
	I1002 20:19:16.938614  498295 ssh_runner.go:146] rm: /preloaded.tar.lz4
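
The copy-and-extract above comes down to one tar invocation: lz4 decompression into /var with xattrs preserved so file capabilities survive the unpack. A minimal sketch of that step, using the same flags as the logged command:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // extractPreload unpacks an lz4-compressed image tarball under dest,
    // preserving security.capability xattrs, as in the logged command above.
    func extractPreload(tarball, dest string) (time.Duration, error) {
        start := time.Now()
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return 0, fmt.Errorf("extract %s: %v: %s", tarball, err, out)
        }
        return time.Since(start), nil
    }

    func main() {
        d, err := extractPreload("/preloaded.tar.lz4", "/var")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("extracted in %v\n", d)
    }
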
	I1002 20:19:16.978395  498295 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:19:17.020412  498295 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:19:17.020442  498295 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:19:17.020450  498295 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.34.1 crio true true} ...
	I1002 20:19:17.020609  498295 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-760875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-760875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:19:17.020680  498295 ssh_runner.go:195] Run: crio config
	I1002 20:19:17.064438  498295 cni.go:84] Creating CNI manager for ""
	I1002 20:19:17.064462  498295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:19:17.064482  498295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:19:17.064504  498295 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-760875 NodeName:addons-760875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:19:17.064640  498295 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-760875"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:19:17.064987  498295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:19:17.077394  498295 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:19:17.077464  498295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:19:17.088675  498295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1002 20:19:17.109325  498295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:19:17.130019  498295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 20:19:17.148817  498295 ssh_runner.go:195] Run: grep 192.168.39.220	control-plane.minikube.internal$ /etc/hosts
	I1002 20:19:17.152631  498295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
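
Unrolled, that one-liner is a small ensure-exactly-one-entry pattern for /etc/hosts. The same logic written out as a sketch (printf is used here so the required tab is visible):

    # 1) emit /etc/hosts minus any existing control-plane entry,
    # 2) append a fresh IP<TAB>name mapping,
    # 3) stage in a PID-suffixed temp file, then copy back as root.
    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.220\tcontrol-plane.minikube.internal\n'
    } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts
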
	I1002 20:19:17.165714  498295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:17.303160  498295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:19:17.333892  498295 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875 for IP: 192.168.39.220
	I1002 20:19:17.333932  498295 certs.go:195] generating shared ca certs ...
	I1002 20:19:17.333951  498295 certs.go:227] acquiring lock for ca certs: {Name:mk99bb18e623cf4cf4a4efda3dab88668aa481a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:17.334133  498295 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key
	I1002 20:19:18.524462  498295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt ...
	I1002 20:19:18.524491  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt: {Name:mkbabc9fb8129565aa17077d36bff0789468fdd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:18.525289  498295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key ...
	I1002 20:19:18.525308  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key: {Name:mkb964d757a0f476b3ffe3f47dc62108ad2607c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:18.525821  498295 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key
	I1002 20:19:18.945057  498295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.crt ...
	I1002 20:19:18.945087  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.crt: {Name:mk086de1eebb30b42cc55ee24fed0de659e37246 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:18.945289  498295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key ...
	I1002 20:19:18.945306  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key: {Name:mk96cacff0a750b97a9d6156bbb666c8563ffd7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:18.945410  498295 certs.go:257] generating profile certs ...
	I1002 20:19:18.945492  498295 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.key
	I1002 20:19:18.945522  498295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt with IP's: []
	I1002 20:19:19.213666  498295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt ...
	I1002 20:19:19.213699  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: {Name:mk41f1db6cee1e38d2a1af4ae16e7d004e26155a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:19.213915  498295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.key ...
	I1002 20:19:19.213932  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.key: {Name:mkc9288189ebbc26ffb45fad55b51c9ae7b8cb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:19.214046  498295 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.key.222fa5fe
	I1002 20:19:19.214075  498295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.crt.222fa5fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220]
	I1002 20:19:19.401800  498295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.crt.222fa5fe ...
	I1002 20:19:19.401839  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.crt.222fa5fe: {Name:mk8a3fe986f82874e3c1ca81860798fe1a281705 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:19.402554  498295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.key.222fa5fe ...
	I1002 20:19:19.402582  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.key.222fa5fe: {Name:mk248a615c0456e444b84928d294d852ec647f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:19.402702  498295 certs.go:382] copying /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.crt.222fa5fe -> /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.crt
	I1002 20:19:19.402847  498295 certs.go:386] copying /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.key.222fa5fe -> /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.key
	I1002 20:19:19.402925  498295 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.key
	I1002 20:19:19.402954  498295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.crt with IP's: []
	I1002 20:19:19.500526  498295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.crt ...
	I1002 20:19:19.500566  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.crt: {Name:mk693ead3ef4b58ac761cfa23147dc274e84c3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:19.500777  498295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.key ...
	I1002 20:19:19.500798  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.key: {Name:mk61b3f7763ae8cf0e4b3af9727c0aa88de37ca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:19.501014  498295 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:19:19.501065  498295 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem (1078 bytes)
	I1002 20:19:19.501098  498295 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:19:19.501133  498295 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem (1675 bytes)
	I1002 20:19:19.501793  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:19:19.533138  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:19:19.560889  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:19:19.588776  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:19:19.616058  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:19:19.643978  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:19:19.671329  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:19:19.699127  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:19:19.730446  498295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:19:19.762014  498295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:19:19.781336  498295 ssh_runner.go:195] Run: openssl version
	I1002 20:19:19.787797  498295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:19:19.802353  498295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:19.808213  498295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:19.808293  498295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:19:19.815348  498295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
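
The symlink name b5213941.0 is not arbitrary: OpenSSL looks up trust anchors in /etc/ssl/certs by the certificate's subject hash plus a numeric suffix, which is exactly what the x509 -hash call above computes. To reproduce the name by hand:

    # Prints the subject hash OpenSSL uses for CA lookup;
    # for this minikube CA it is b5213941, hence the b5213941.0 symlink.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
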
	I1002 20:19:19.829302  498295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:19:19.835279  498295 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
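
A non-zero exit from stat is how minikube cheaply distinguishes a fresh control plane from a restart. The same check by hand, as a sketch:

    # Exit status 1 means kubeadm never generated the kubelet client cert,
    # i.e. this is a first start, not a restart of an existing cluster.
    if ! stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo "first start"
    fi
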
	I1002 20:19:19.835356  498295 kubeadm.go:400] StartCluster: {Name:addons-760875 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-760875 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:19:19.835450  498295 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:19:19.835534  498295 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:19:19.880522  498295 cri.go:89] found id: ""
	I1002 20:19:19.880608  498295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:19:19.893074  498295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:19:19.904916  498295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:19:19.916116  498295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:19:19.916136  498295 kubeadm.go:157] found existing configuration files:
	
	I1002 20:19:19.916192  498295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:19:19.927333  498295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:19:19.927391  498295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:19:19.939160  498295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:19:19.950183  498295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:19:19.950255  498295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:19:19.962314  498295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:19:19.975218  498295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:19:19.975279  498295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:19:19.987329  498295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:19:19.998388  498295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:19:19.998459  498295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
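
The four grep-then-rm pairs above are one loop in spirit: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so kubeadm will regenerate it. Condensed into a sketch of the same behavior:

    # Drop stale kubeconfigs that don't point at the expected endpoint.
    # On this first start the files are simply absent, so every rm is a no-op.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
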
	I1002 20:19:20.010269  498295 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 20:19:20.154333  498295 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:19:32.503412  498295 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:19:32.503511  498295 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:19:32.503577  498295 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:19:32.503657  498295 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:19:32.503835  498295 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:19:32.503955  498295 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:19:32.505141  498295 out.go:252]   - Generating certificates and keys ...
	I1002 20:19:32.505224  498295 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:19:32.505300  498295 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:19:32.505404  498295 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:19:32.505483  498295 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:19:32.505585  498295 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:19:32.505640  498295 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:19:32.505717  498295 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:19:32.505897  498295 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-760875 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1002 20:19:32.505975  498295 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:19:32.506110  498295 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-760875 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1002 20:19:32.506233  498295 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:19:32.506332  498295 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:19:32.506400  498295 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:19:32.506484  498295 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:19:32.506561  498295 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:19:32.506649  498295 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:19:32.506751  498295 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:19:32.506850  498295 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:19:32.506901  498295 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:19:32.506966  498295 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:19:32.507021  498295 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:19:32.508119  498295 out.go:252]   - Booting up control plane ...
	I1002 20:19:32.508201  498295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:19:32.508263  498295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:19:32.508331  498295 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:19:32.508442  498295 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:19:32.508541  498295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:19:32.508632  498295 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:19:32.508721  498295 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:19:32.508781  498295 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:19:32.508915  498295 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:19:32.509042  498295 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:19:32.509108  498295 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.831791ms
	I1002 20:19:32.509203  498295 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:19:32.509291  498295 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.220:8443/livez
	I1002 20:19:32.509410  498295 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:19:32.509487  498295 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:19:32.509547  498295 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.529332035s
	I1002 20:19:32.509608  498295 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.448420435s
	I1002 20:19:32.509669  498295 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502493964s
	I1002 20:19:32.509823  498295 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:19:32.509976  498295 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:19:32.510044  498295 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:19:32.510222  498295 kubeadm.go:318] [mark-control-plane] Marking the node addons-760875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:19:32.510308  498295 kubeadm.go:318] [bootstrap-token] Using token: x689k4.8qaem6wou13ehull
	I1002 20:19:32.511325  498295 out.go:252]   - Configuring RBAC rules ...
	I1002 20:19:32.511429  498295 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:19:32.511522  498295 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:19:32.511721  498295 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:19:32.511925  498295 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:19:32.512107  498295 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:19:32.512183  498295 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:19:32.512296  498295 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:19:32.512361  498295 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 20:19:32.512430  498295 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 20:19:32.512444  498295 kubeadm.go:318] 
	I1002 20:19:32.512525  498295 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 20:19:32.512534  498295 kubeadm.go:318] 
	I1002 20:19:32.512624  498295 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 20:19:32.512631  498295 kubeadm.go:318] 
	I1002 20:19:32.512652  498295 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 20:19:32.512722  498295 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:19:32.512779  498295 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:19:32.512788  498295 kubeadm.go:318] 
	I1002 20:19:32.512836  498295 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 20:19:32.512842  498295 kubeadm.go:318] 
	I1002 20:19:32.512881  498295 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:19:32.512887  498295 kubeadm.go:318] 
	I1002 20:19:32.512929  498295 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 20:19:32.512995  498295 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:19:32.513053  498295 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:19:32.513059  498295 kubeadm.go:318] 
	I1002 20:19:32.513166  498295 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:19:32.513258  498295 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 20:19:32.513267  498295 kubeadm.go:318] 
	I1002 20:19:32.513373  498295 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token x689k4.8qaem6wou13ehull \
	I1002 20:19:32.513494  498295 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:32d7d270f6a5dfe43597582240b68ebe9df949068deb05a8c74918e20d720da3 \
	I1002 20:19:32.513520  498295 kubeadm.go:318] 	--control-plane 
	I1002 20:19:32.513524  498295 kubeadm.go:318] 
	I1002 20:19:32.513593  498295 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:19:32.513598  498295 kubeadm.go:318] 
	I1002 20:19:32.513664  498295 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token x689k4.8qaem6wou13ehull \
	I1002 20:19:32.513789  498295 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:32d7d270f6a5dfe43597582240b68ebe9df949068deb05a8c74918e20d720da3 
	I1002 20:19:32.513802  498295 cni.go:84] Creating CNI manager for ""
	I1002 20:19:32.513809  498295 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:19:32.514877  498295 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:19:32.515754  498295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:19:32.529325  498295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
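
The 496-byte conflist itself is not echoed into the log. For orientation, a minimal bridge-plus-portmap conflist for the 10.244.0.0/16 pod subnet configured earlier typically looks like the following; this is a hand-written sketch, not the exact file minikube writes:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
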
	I1002 20:19:32.548576  498295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:19:32.548678  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:32.548716  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-760875 minikube.k8s.io/updated_at=2025_10_02T20_19_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=addons-760875 minikube.k8s.io/primary=true
	I1002 20:19:32.690180  498295 ops.go:34] apiserver oom_adj: -16
	I1002 20:19:32.690190  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:33.191253  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:33.691206  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.190536  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:34.690763  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.190837  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:35.690974  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.190671  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.690662  498295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:19:36.780020  498295 kubeadm.go:1113] duration metric: took 4.231409083s to wait for elevateKubeSystemPrivileges
	I1002 20:19:36.780092  498295 kubeadm.go:402] duration metric: took 16.944741693s to StartCluster
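
The burst of identical "get sa default" calls above is a poll loop: minikube retries roughly every 500ms until the default ServiceAccount exists, which is the signal that the minikube-rbac elevation can be considered applied. The same wait, scripted as a sketch:

    # Poll until the 'default' ServiceAccount appears, ~500ms between attempts,
    # mirroring the elevateKubeSystemPrivileges wait recorded in the log.
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
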
	I1002 20:19:36.780122  498295 settings.go:142] acquiring lock: {Name:mk713e1c8098ab4e764fe2cb637b0408c7b1a3ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:36.780275  498295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 20:19:36.780824  498295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/kubeconfig: {Name:mk4bbb10e20496c232fa2a76298e716d67d36cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:19:36.781543  498295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:19:36.781554  498295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:19:36.781666  498295 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 20:19:36.781835  498295 addons.go:69] Setting yakd=true in profile "addons-760875"
	I1002 20:19:36.781851  498295 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-760875"
	I1002 20:19:36.781860  498295 addons.go:238] Setting addon yakd=true in "addons-760875"
	I1002 20:19:36.781867  498295 addons.go:69] Setting registry=true in profile "addons-760875"
	I1002 20:19:36.781880  498295 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-760875"
	I1002 20:19:36.781887  498295 addons.go:238] Setting addon registry=true in "addons-760875"
	I1002 20:19:36.781912  498295 addons.go:69] Setting volcano=true in profile "addons-760875"
	I1002 20:19:36.781916  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.781899  498295 addons.go:69] Setting storage-provisioner=true in profile "addons-760875"
	I1002 20:19:36.781925  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.781913  498295 addons.go:69] Setting ingress-dns=true in profile "addons-760875"
	I1002 20:19:36.781929  498295 addons.go:69] Setting metrics-server=true in profile "addons-760875"
	I1002 20:19:36.781956  498295 addons.go:238] Setting addon storage-provisioner=true in "addons-760875"
	I1002 20:19:36.781965  498295 addons.go:238] Setting addon ingress-dns=true in "addons-760875"
	I1002 20:19:36.781971  498295 addons.go:238] Setting addon metrics-server=true in "addons-760875"
	I1002 20:19:36.781971  498295 addons.go:69] Setting registry-creds=true in profile "addons-760875"
	I1002 20:19:36.781992  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.781997  498295 addons.go:238] Setting addon registry-creds=true in "addons-760875"
	I1002 20:19:36.782018  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.782035  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.782043  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.781860  498295 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-760875"
	I1002 20:19:36.782295  498295 addons.go:69] Setting volumesnapshots=true in profile "addons-760875"
	I1002 20:19:36.782313  498295 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-760875"
	I1002 20:19:36.782333  498295 addons.go:238] Setting addon volumesnapshots=true in "addons-760875"
	I1002 20:19:36.782341  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.782357  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.781897  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.782460  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.782481  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.782485  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.782511  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.782516  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.782533  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.782541  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.782560  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.782559  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.781921  498295 addons.go:69] Setting inspektor-gadget=true in profile "addons-760875"
	I1002 20:19:36.782653  498295 addons.go:238] Setting addon inspektor-gadget=true in "addons-760875"
	I1002 20:19:36.782674  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.782745  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.782773  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.782779  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.782805  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.782840  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.781837  498295 addons.go:69] Setting default-storageclass=true in profile "addons-760875"
	I1002 20:19:36.782872  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.781926  498295 addons.go:238] Setting addon volcano=true in "addons-760875"
	I1002 20:19:36.782873  498295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-760875"
	I1002 20:19:36.781905  498295 addons.go:69] Setting ingress=true in profile "addons-760875"
	I1002 20:19:36.782904  498295 addons.go:238] Setting addon ingress=true in "addons-760875"
	I1002 20:19:36.782498  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.782920  498295 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-760875"
	I1002 20:19:36.781904  498295 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-760875"
	I1002 20:19:36.782955  498295 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-760875"
	I1002 20:19:36.782961  498295 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-760875"
	I1002 20:19:36.782965  498295 addons.go:69] Setting cloud-spanner=true in profile "addons-760875"
	I1002 20:19:36.781852  498295 config.go:182] Loaded profile config "addons-760875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:36.782974  498295 addons.go:238] Setting addon cloud-spanner=true in "addons-760875"
	I1002 20:19:36.781895  498295 addons.go:69] Setting gcp-auth=true in profile "addons-760875"
	I1002 20:19:36.783063  498295 mustload.go:65] Loading cluster: addons-760875
	I1002 20:19:36.783092  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.783597  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.783852  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.783892  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.784101  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.784188  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.784226  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.784623  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.786040  498295 out.go:179] * Verifying Kubernetes components...
	I1002 20:19:36.787227  498295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:19:36.792011  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.792054  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.792147  498295 config.go:182] Loaded profile config "addons-760875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:19:36.792237  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.792270  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.792510  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.792538  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.796018  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.796066  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.796688  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.796740  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.796752  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.796786  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.797317  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.797356  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.804226  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I1002 20:19:36.810452  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.811048  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.811072  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.811458  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.812156  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.812207  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.814201  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I1002 20:19:36.816876  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I1002 20:19:36.818594  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.822957  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.822977  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.823140  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I1002 20:19:36.823637  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.824172  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.824188  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.824548  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.825115  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.825183  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.826075  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.826328  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.826918  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.827427  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.827445  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.828003  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.828881  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37101
	I1002 20:19:36.828920  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1002 20:19:36.830058  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.830104  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.831488  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42165
	I1002 20:19:36.831773  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46569
	I1002 20:19:36.831974  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.832259  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I1002 20:19:36.832429  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.832927  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.832951  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.832974  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I1002 20:19:36.833342  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.833763  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.833781  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.833783  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.834463  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.834503  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.834512  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.834543  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.835718  498295 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-760875"
	I1002 20:19:36.835765  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.836155  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.836190  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.842108  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42155
	I1002 20:19:36.842134  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45295
	I1002 20:19:36.842167  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.842174  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.842112  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.842112  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.842123  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.842947  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.843040  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.843057  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.843061  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.843080  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.843428  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.843470  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.843643  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.843695  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.843957  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.844181  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.844201  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.844261  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.844278  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.843648  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.844638  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.844696  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.844974  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I1002 20:19:36.845343  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.845387  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.846224  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.846868  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.846889  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.846954  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.847003  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.847492  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.847524  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.847776  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.848181  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.850973  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I1002 20:19:36.851426  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.851442  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.851967  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.852045  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.852435  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.852465  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.852773  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.852805  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.853005  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.853658  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.853688  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.854984  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.855487  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.855502  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.856284  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.857265  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.857762  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.858545  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I1002 20:19:36.859114  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.859938  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.859964  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.860383  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.860659  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.862630  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I1002 20:19:36.868811  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I1002 20:19:36.868829  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.868852  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36757
	I1002 20:19:36.868814  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I1002 20:19:36.871490  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I1002 20:19:36.871568  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.871610  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.871972  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.872399  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.872419  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.872508  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.872676  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.872691  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.872910  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.872931  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.873132  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.873153  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.873177  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.873210  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.873800  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.873821  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.873890  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.873983  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.874633  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.874896  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.877133  498295 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:19:36.877783  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.878014  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.878512  498295 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:36.878528  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:19:36.878549  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.878583  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.878598  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.879005  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I1002 20:19:36.880058  498295 addons.go:238] Setting addon default-storageclass=true in "addons-760875"
	I1002 20:19:36.880101  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:36.880525  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.880577  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.880810  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.880810  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.881438  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.882290  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.882365  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.882629  498295 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 20:19:36.883741  498295 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:36.883946  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 20:19:36.885193  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.885990  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.886148  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.886230  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I1002 20:19:36.886671  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.886972  498295 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 20:19:36.887444  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.887467  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.887515  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.887544  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.887788  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.887925  498295 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 20:19:36.887938  498295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 20:19:36.887956  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.888282  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.888353  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.888374  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.888388  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.888418  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39131
	I1002 20:19:36.888968  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.889008  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.888974  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.889201  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.889323  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.891157  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44607
	I1002 20:19:36.891380  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.892302  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.892373  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.892453  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.892481  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.893204  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.893210  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.893230  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.893904  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.894003  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.894283  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.894547  498295 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 20:19:36.894557  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.894700  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.894738  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.894899  498295 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 20:19:36.895001  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.895848  498295 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:36.895870  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 20:19:36.895889  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.896502  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I1002 20:19:36.896972  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.897148  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.897161  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.897823  498295 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:36.898065  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.898243  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.898948  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.899003  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.899553  498295 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:36.900567  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.900627  498295 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:36.900638  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 20:19:36.900984  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.901081  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.900652  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.901937  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.902099  498295 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 20:19:36.902181  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.902206  498295 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 20:19:36.903602  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.903864  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.906480  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
	I1002 20:19:36.906492  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.906527  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.906682  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46201
	I1002 20:19:36.906741  498295 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 20:19:36.906856  498295 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:36.906868  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 20:19:36.906886  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.907439  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.907592  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.907841  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.907868  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.908028  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.908096  498295 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 20:19:36.908105  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 20:19:36.908119  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.908193  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.908342  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.908382  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.908410  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.908630  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.908565  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.908846  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.909050  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.909293  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.909408  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.909431  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.910187  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.910210  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.910232  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.910649  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.910844  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.911096  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.911255  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.911469  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I1002 20:19:36.911607  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.911739  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.912038  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.912257  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.912696  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.912749  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.913274  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.915465  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33859
	I1002 20:19:36.916086  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.916403  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46461
	I1002 20:19:36.916781  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.916798  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.916907  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.916978  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.917108  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.917156  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.917761  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.917810  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.917899  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.917918  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.917979  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.918032  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.918067  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.918115  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.918213  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.918305  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.918586  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:36.918630  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:36.918724  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.918747  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.918751  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.919728  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.919854  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.919772  498295 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 20:19:36.919990  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.919792  498295 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 20:19:36.920340  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.919809  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.920637  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.921475  498295 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 20:19:36.921501  498295 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 20:19:36.921519  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.921588  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.921602  498295 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:36.921673  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 20:19:36.921724  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.921860  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37741
	I1002 20:19:36.922388  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I1002 20:19:36.924090  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.924473  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.924873  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.924891  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.925504  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.925731  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.926328  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.926837  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.926870  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.927268  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.927480  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.929176  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I1002 20:19:36.929459  498295 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 20:19:36.929562  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I1002 20:19:36.929693  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.930240  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.930256  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.930491  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.930573  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.930497  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.930745  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.931109  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:36.931128  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:36.931129  498295 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:36.931139  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 20:19:36.931162  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.931940  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.931946  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:36.932002  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:36.932016  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:36.932023  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:36.932032  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:36.932091  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.932105  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.932129  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.932143  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.932152  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.932158  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.932357  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.932494  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.932626  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:36.932668  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:36.932679  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 20:19:36.932775  498295 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 20:19:36.933057  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.933500  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.933593  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.933671  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 20:19:36.933898  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.933977  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.934032  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.934495  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.935207  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.935505  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.935750  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.936153  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.936563  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 20:19:36.937231  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.937660  498295 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 20:19:36.937766  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.938262  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.938292  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.938539  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.938661  498295 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 20:19:36.938678  498295 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 20:19:36.938694  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 20:19:36.938717  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.938800  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.938758  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 20:19:36.939080  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.939274  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.939810  498295 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 20:19:36.939829  498295 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 20:19:36.939856  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.940528  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1002 20:19:36.940664  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 20:19:36.941249  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.941772  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.941797  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.942139  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.942319  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.942363  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 20:19:36.943224  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 20:19:36.943427  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.944138  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.944168  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.944326  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.944392  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.944603  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.944745  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.944935  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.945079  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.945136  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 20:19:36.945403  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.945432  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.945892  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I1002 20:19:36.945892  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.946055  498295 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 20:19:36.946079  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.946267  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.946288  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:36.946421  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.946770  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:36.946793  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:36.946881  498295 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 20:19:36.947177  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:36.947371  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:36.947745  498295 out.go:179]   - Using image docker.io/busybox:stable
	I1002 20:19:36.947746  498295 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 20:19:36.947822  498295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 20:19:36.947845  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.948731  498295 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:36.948749  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 20:19:36.948766  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.949471  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:36.949753  498295 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:36.949768  498295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:19:36.949784  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:36.952320  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.952903  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.952930  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.953105  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.953299  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.953475  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.953571  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.953645  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.954148  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.954162  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.954199  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.954386  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.954698  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.954792  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:36.954836  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:36.955060  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.955408  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:36.955632  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:36.955872  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:36.956156  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:36.956530  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	W1002 20:19:37.132487  498295 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36656->192.168.39.220:22: read: connection reset by peer
	I1002 20:19:37.132529  498295 retry.go:31] will retry after 297.339881ms: ssh: handshake failed: read tcp 192.168.39.1:36656->192.168.39.220:22: read: connection reset by peer
	W1002 20:19:37.132619  498295 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36676->192.168.39.220:22: read: connection reset by peer
	I1002 20:19:37.132631  498295 retry.go:31] will retry after 352.34138ms: ssh: handshake failed: read tcp 192.168.39.1:36676->192.168.39.220:22: read: connection reset by peer
	W1002 20:19:37.132668  498295 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36672->192.168.39.220:22: read: connection reset by peer
	I1002 20:19:37.132689  498295 retry.go:31] will retry after 223.495135ms: ssh: handshake failed: read tcp 192.168.39.1:36672->192.168.39.220:22: read: connection reset by peer
	W1002 20:19:37.132804  498295 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.220:22: read: connection reset by peer
	I1002 20:19:37.132817  498295 retry.go:31] will retry after 297.835381ms: ssh: handshake failed: read tcp 192.168.39.1:36686->192.168.39.220:22: read: connection reset by peer
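The four handshake failures above are absorbed by minikube's retry helper, which sleeps a short randomized delay and dials again, as the "will retry after Nms" lines show. A minimal Go sketch of that pattern follows; it assumes nothing about the real retry.go beyond the behavior visible in the log, and the function and constants here are illustrative, not minikube's actual implementation.

    // Minimal sketch of retry-with-randomized-delay, in the spirit of the
    // retry.go lines above. Hypothetical helper; not minikube's code.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn up to maxAttempts times, sleeping a small
    // randomized delay between attempts, mirroring the "will retry after Nms"
    // messages in the log.
    func retryWithBackoff(maxAttempts int, fn func() error) error {
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := time.Duration(200+rand.Intn(200)) * time.Millisecond
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	_ = retryWithBackoff(3, func() error {
    		calls++
    		if calls < 3 {
    			return fmt.Errorf("ssh: handshake failed (attempt %d)", calls)
    		}
    		return nil
    	})
    }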
	I1002 20:19:37.274198  498295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:19:37.274246  498295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
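The bash pipeline above edits the coredns ConfigMap in place: sed inserts a hosts block ahead of the "forward . /etc/resolv.conf" directive and a log directive ahead of "errors", then kubectl replace writes the result back. The hosts block it injects, reproduced from the command itself, is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

This is what later lets pods resolve host.minikube.internal to the host-side bridge address 192.168.39.1; the completion and "host record injected" lines at 20:19:40 below confirm the replace succeeded.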
	I1002 20:19:37.307147  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 20:19:37.309187  498295 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 20:19:37.309213  498295 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 20:19:37.422903  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 20:19:37.428860  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 20:19:37.435880  498295 node_ready.go:35] waiting up to 6m0s for node "addons-760875" to be "Ready" ...
	I1002 20:19:37.441244  498295 node_ready.go:49] node "addons-760875" is "Ready"
	I1002 20:19:37.441268  498295 node_ready.go:38] duration metric: took 5.365871ms for node "addons-760875" to be "Ready" ...
	I1002 20:19:37.441282  498295 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:19:37.441319  498295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:37.446692  498295 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:37.446717  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 20:19:37.500739  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 20:19:37.506972  498295 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 20:19:37.506995  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 20:19:37.515854  498295 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 20:19:37.515879  498295 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 20:19:37.517876  498295 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:37.517891  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 20:19:37.520109  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 20:19:37.534624  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 20:19:37.685767  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:19:37.782350  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:37.975781  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 20:19:38.000525  498295 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 20:19:38.000551  498295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 20:19:38.014619  498295 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 20:19:38.014642  498295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 20:19:38.014995  498295 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 20:19:38.015013  498295 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 20:19:38.116381  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:19:38.338469  498295 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 20:19:38.338494  498295 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 20:19:38.431776  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 20:19:38.613973  498295 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:38.613999  498295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 20:19:38.619362  498295 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 20:19:38.619381  498295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 20:19:38.630373  498295 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 20:19:38.630391  498295 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 20:19:38.789311  498295 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 20:19:38.789341  498295 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 20:19:39.004552  498295 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 20:19:39.004582  498295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 20:19:39.008349  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 20:19:39.008408  498295 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.008432  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 20:19:39.026071  498295 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 20:19:39.026099  498295 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 20:19:39.285615  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 20:19:39.289906  498295 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 20:19:39.289934  498295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 20:19:39.312954  498295 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 20:19:39.312985  498295 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 20:19:39.791489  498295 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:39.791521  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 20:19:39.816926  498295 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 20:19:39.816960  498295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 20:19:39.956843  498295 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 20:19:39.956870  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 20:19:40.027695  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:40.242884  498295 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.968605683s)
	I1002 20:19:40.242932  498295 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 20:19:40.364508  498295 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 20:19:40.364537  498295 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 20:19:40.752899  498295 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-760875" context rescaled to 1 replicas
	I1002 20:19:40.887765  498295 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 20:19:40.887797  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 20:19:41.070509  498295 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 20:19:41.070537  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 20:19:41.254897  498295 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:19:41.254936  498295 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 20:19:41.482298  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 20:19:44.331545  498295 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 20:19:44.331586  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:44.335333  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:44.335930  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:44.335968  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:44.336196  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:44.336414  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:44.336570  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:44.336742  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
	I1002 20:19:44.436828  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.129627252s)
	I1002 20:19:44.436886  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.436898  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.436892  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.013953713s)
	I1002 20:19:44.436946  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.436962  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.436964  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.008069324s)
	I1002 20:19:44.436973  498295 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.995640612s)
	I1002 20:19:44.436989  498295 api_server.go:72] duration metric: took 7.655409085s to wait for apiserver process to appear ...
	I1002 20:19:44.437001  498295 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:19:44.437021  498295 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
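The healthz wait announced here polls the apiserver endpoint until it answers with 200 or a deadline passes. A minimal Go sketch of such a poll loop, assuming only the URL from the log line above; the InsecureSkipVerify here is a shortcut for the sketch (minikube itself authenticates against the cluster CA), and the timeout values are illustrative.

    // Minimal sketch of waiting for apiserver /healthz; hypothetical helper.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		// Sketch only: skip TLS verification for brevity.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz at %s not ready within %v", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.220:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }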
	I1002 20:19:44.437059  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.916928181s)
	I1002 20:19:44.437080  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437093  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.436989  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437121  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437021  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.936257859s)
	I1002 20:19:44.437170  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.902510959s)
	I1002 20:19:44.437189  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437196  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437207  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437219  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437243  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.437269  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.437275  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.437277  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.437299  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.437308  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437282  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437337  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.437369  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.437384  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.437389  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.751587367s)
	I1002 20:19:44.437422  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437431  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437434  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437395  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437481  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437481  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.655099038s)
	W1002 20:19:44.437503  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:44.437521  498295 retry.go:31] will retry after 228.733497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
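
	The "apiVersion not set, kind not set" failure above is kubectl refusing a manifest whose mandatory top-level apiVersion and kind fields are missing, so the retries that follow cannot succeed until the file itself changes. A minimal Go sketch of that structural check, assuming the gopkg.in/yaml.v3 package and a single-document file (checkManifest is a hypothetical helper, not minikube or kubectl code):

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	// checkManifest reports the same two failures kubectl prints above when a
	// YAML document is missing its mandatory top-level fields. Hypothetical
	// helper: kubectl's real validation also checks the document against the
	// API schema.
	func checkManifest(doc []byte) error {
		var m map[string]interface{}
		if err := yaml.Unmarshal(doc, &m); err != nil {
			return err
		}
		var missing []string
		if _, ok := m["apiVersion"]; !ok {
			missing = append(missing, "apiVersion not set")
		}
		if _, ok := m["kind"]; !ok {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			return fmt.Errorf("error validating data: %v", missing)
		}
		return nil
	}

	func main() {
		data, err := os.ReadFile("ig-crd.yaml") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := checkManifest(data); err != nil {
			fmt.Fprintln(os.Stderr, "error:", err)
			os.Exit(1)
		}
		fmt.Println("manifest has apiVersion and kind")
	}

	A multi-document file would first need to be split on "---" separators and each document checked individually.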
	I1002 20:19:44.437571  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.461758956s)
	I1002 20:19:44.437597  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.437609  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437615  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437613  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.321209226s)
	I1002 20:19:44.437661  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.005861038s)
	I1002 20:19:44.437667  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437674  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437680  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437695  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.437726  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437730  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.437316  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437761  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.429390472s)
	I1002 20:19:44.437767  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.437776  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.437778  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437783  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437788  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437788  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.437752  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.437816  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.437829  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437838  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437861  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.152221226s)
	I1002 20:19:44.437791  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437876  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.437884  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.437797  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.437903  498295 addons.go:479] Verifying addon ingress=true in "addons-760875"
	I1002 20:19:44.438766  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.438777  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.438987  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.439028  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.439037  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.439046  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.439054  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.439129  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.439154  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.439162  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.439876  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.439938  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.439969  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.439986  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.440021  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.440051  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.440058  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.440065  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.440071  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.440132  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.440143  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.440628  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.440647  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.441017  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.441072  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.441078  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.441085  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.441083  498295 out.go:179] * Verifying ingress addon...
	I1002 20:19:44.441091  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.441206  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.441233  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.441240  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.443051  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.443082  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.443092  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.443100  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.443173  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.443201  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.443208  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.443217  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.443224  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.443395  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.443430  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.443485  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.443496  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.443502  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.443656  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.439966  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.443690  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.443761  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.443769  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.443781  498295 addons.go:479] Verifying addon metrics-server=true in "addons-760875"
	I1002 20:19:44.443787  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.443794  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.444012  498295 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 20:19:44.444278  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.444290  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.444299  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.444307  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.444376  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.444384  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.444716  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.444736  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.444745  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.445612  498295 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-760875 service yakd-dashboard -n yakd-dashboard
	
	I1002 20:19:44.447662  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.447671  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.447685  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.447695  498295 addons.go:479] Verifying addon registry=true in "addons-760875"
	I1002 20:19:44.449180  498295 out.go:179] * Verifying registry addon...
	I1002 20:19:44.450638  498295 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 20:19:44.462528  498295 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1002 20:19:44.470440  498295 api_server.go:141] control plane version: v1.34.1
	I1002 20:19:44.470468  498295 api_server.go:131] duration metric: took 33.457862ms to wait for apiserver health ...
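
	The api_server.go check above is an HTTPS GET against the apiserver's /healthz endpoint that passes once a 200 with body "ok" comes back. A rough Go equivalent (apiserverHealthy is an illustrative name; the real check authenticates with the cluster's client certificates rather than skipping TLS verification):

	package healthsketch

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// apiserverHealthy GETs /healthz and treats a 200 as healthy, mirroring
	// the api_server.go lines above. TLS verification is skipped here purely
	// for brevity; production code must present and verify cluster certs.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s/healthz returned %d: %s\n", endpoint, resp.StatusCode, body)
		return resp.StatusCode == http.StatusOK, nil
	}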
	I1002 20:19:44.470481  498295 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:19:44.488696  498295 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 20:19:44.488732  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:44.488753  498295 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 20:19:44.488762  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
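
	The kapi.go:96 "waiting for pod ... current state: Pending" lines that repeat through the rest of this log are a poll loop: list pods by label selector, check their phase, sleep, repeat until a timeout. A condensed sketch of that loop using client-go and apimachinery (waitForPodsRunning is a hypothetical name, not minikube's function):

	package kapisketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in ns until all are
	// Running or the timeout expires, sketching what the kapi.go "waiting
	// for pod" loop in this log is doing.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // pods not created yet
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // e.g. still Pending, as in the log
					}
				}
				return true, nil
			})
	}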
	I1002 20:19:44.504304  498295 system_pods.go:59] 15 kube-system pods found
	I1002 20:19:44.504345  498295 system_pods.go:61] "amd-gpu-device-plugin-6fptp" [58f251e5-b493-4a24-803f-74575247bd51] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 20:19:44.504357  498295 system_pods.go:61] "coredns-66bc5c9577-gzgzk" [9b0b79c3-ed77-46f1-a9b2-fbf5a3243b0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:19:44.504367  498295 system_pods.go:61] "coredns-66bc5c9577-t6k2m" [e2f42852-82cf-4a0a-b8e2-c84b4392aff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:19:44.504376  498295 system_pods.go:61] "etcd-addons-760875" [40b892e1-f994-4f83-a0c1-f5d254599a46] Running
	I1002 20:19:44.504383  498295 system_pods.go:61] "kube-apiserver-addons-760875" [96e039b4-b765-4aec-b803-fa19bea1543b] Running
	I1002 20:19:44.504393  498295 system_pods.go:61] "kube-controller-manager-addons-760875" [54c6d1a6-5337-4af7-8a38-c3dcb9ffa416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:19:44.504413  498295 system_pods.go:61] "kube-ingress-dns-minikube" [544a1f20-492a-44ee-96d1-f6b8375a80d0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:19:44.504420  498295 system_pods.go:61] "kube-proxy-lghgd" [17f6e9f1-4463-4760-b9de-cdbe4720ab23] Running
	I1002 20:19:44.504426  498295 system_pods.go:61] "kube-scheduler-addons-760875" [303a6bae-7921-424b-94cf-afc9750a9f57] Running
	I1002 20:19:44.504434  498295 system_pods.go:61] "metrics-server-85b7d694d7-5n4lk" [7217fcab-2e35-4e35-8955-9287e23137f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:19:44.504448  498295 system_pods.go:61] "nvidia-device-plugin-daemonset-fvbmg" [44467d83-4766-45cb-a8b3-8ed6ef1292e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:19:44.504460  498295 system_pods.go:61] "registry-66898fdd98-ntfh4" [c74ad645-ae4b-4223-925f-d29c9be1982d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:19:44.504470  498295 system_pods.go:61] "registry-creds-764b6fb674-hhx89" [32ac2209-8a5f-4769-a9fb-b7537b630416] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:19:44.504481  498295 system_pods.go:61] "registry-proxy-2d9m2" [f52e5ce2-9dbc-4f7a-a552-2a8d00f23cf7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:19:44.504490  498295 system_pods.go:61] "storage-provisioner" [bcd84c2f-23ba-439f-8d70-e952ae36b801] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:19:44.504510  498295 system_pods.go:74] duration metric: took 34.020562ms to wait for pod list to return data ...
	I1002 20:19:44.504524  498295 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:19:44.507692  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.507720  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.508007  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.508025  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 20:19:44.508113  498295 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
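
	The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the StorageClass changed between the read and the update, so the stale resourceVersion is rejected. The usual client-go remedy is to re-read the object inside a conflict-retry loop; a sketch of that pattern (markDefaultStorageClass is hypothetical, and minikube's actual handling may differ):

	package scsketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefaultStorageClass re-reads the StorageClass on every attempt so
	// the update carries a fresh resourceVersion, avoiding the "object has
	// been modified" conflict seen in the log above.
	func markDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}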
	I1002 20:19:44.525814  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:44.525837  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:44.526116  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:44.526134  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:44.526162  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:44.531575  498295 default_sa.go:45] found service account: "default"
	I1002 20:19:44.531594  498295 default_sa.go:55] duration metric: took 27.062115ms for default service account to be created ...
	I1002 20:19:44.531605  498295 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:19:44.556398  498295 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 20:19:44.598351  498295 addons.go:238] Setting addon gcp-auth=true in "addons-760875"
	I1002 20:19:44.598421  498295 host.go:66] Checking if "addons-760875" exists ...
	I1002 20:19:44.598803  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:44.598840  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:44.599064  498295 system_pods.go:86] 15 kube-system pods found
	I1002 20:19:44.599110  498295 system_pods.go:89] "amd-gpu-device-plugin-6fptp" [58f251e5-b493-4a24-803f-74575247bd51] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 20:19:44.599127  498295 system_pods.go:89] "coredns-66bc5c9577-gzgzk" [9b0b79c3-ed77-46f1-a9b2-fbf5a3243b0f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:19:44.599146  498295 system_pods.go:89] "coredns-66bc5c9577-t6k2m" [e2f42852-82cf-4a0a-b8e2-c84b4392aff8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:19:44.599155  498295 system_pods.go:89] "etcd-addons-760875" [40b892e1-f994-4f83-a0c1-f5d254599a46] Running
	I1002 20:19:44.599166  498295 system_pods.go:89] "kube-apiserver-addons-760875" [96e039b4-b765-4aec-b803-fa19bea1543b] Running
	I1002 20:19:44.599182  498295 system_pods.go:89] "kube-controller-manager-addons-760875" [54c6d1a6-5337-4af7-8a38-c3dcb9ffa416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:19:44.599195  498295 system_pods.go:89] "kube-ingress-dns-minikube" [544a1f20-492a-44ee-96d1-f6b8375a80d0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 20:19:44.599205  498295 system_pods.go:89] "kube-proxy-lghgd" [17f6e9f1-4463-4760-b9de-cdbe4720ab23] Running
	I1002 20:19:44.599212  498295 system_pods.go:89] "kube-scheduler-addons-760875" [303a6bae-7921-424b-94cf-afc9750a9f57] Running
	I1002 20:19:44.599223  498295 system_pods.go:89] "metrics-server-85b7d694d7-5n4lk" [7217fcab-2e35-4e35-8955-9287e23137f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:19:44.599230  498295 system_pods.go:89] "nvidia-device-plugin-daemonset-fvbmg" [44467d83-4766-45cb-a8b3-8ed6ef1292e6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 20:19:44.599252  498295 system_pods.go:89] "registry-66898fdd98-ntfh4" [c74ad645-ae4b-4223-925f-d29c9be1982d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 20:19:44.599265  498295 system_pods.go:89] "registry-creds-764b6fb674-hhx89" [32ac2209-8a5f-4769-a9fb-b7537b630416] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 20:19:44.599278  498295 system_pods.go:89] "registry-proxy-2d9m2" [f52e5ce2-9dbc-4f7a-a552-2a8d00f23cf7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 20:19:44.599291  498295 system_pods.go:89] "storage-provisioner" [bcd84c2f-23ba-439f-8d70-e952ae36b801] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:19:44.599307  498295 system_pods.go:126] duration metric: took 67.694446ms to wait for k8s-apps to be running ...
	I1002 20:19:44.599326  498295 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:19:44.599393  498295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:19:44.614565  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I1002 20:19:44.615179  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:44.615756  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:44.615791  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:44.616190  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:44.616648  498295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:19:44.616679  498295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:19:44.631748  498295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1002 20:19:44.632183  498295 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:19:44.632598  498295 main.go:141] libmachine: Using API Version  1
	I1002 20:19:44.632617  498295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:19:44.633029  498295 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:19:44.633246  498295 main.go:141] libmachine: (addons-760875) Calling .GetState
	I1002 20:19:44.635214  498295 main.go:141] libmachine: (addons-760875) Calling .DriverName
	I1002 20:19:44.635447  498295 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 20:19:44.635474  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHHostname
	I1002 20:19:44.638588  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:44.639137  498295 main.go:141] libmachine: (addons-760875) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:19:bc", ip: ""} in network mk-addons-760875: {Iface:virbr1 ExpiryTime:2025-10-02 21:19:09 +0000 UTC Type:0 Mac:52:54:00:75:19:bc Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:addons-760875 Clientid:01:52:54:00:75:19:bc}
	I1002 20:19:44.639171  498295 main.go:141] libmachine: (addons-760875) DBG | domain addons-760875 has defined IP address 192.168.39.220 and MAC address 52:54:00:75:19:bc in network mk-addons-760875
	I1002 20:19:44.639335  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHPort
	I1002 20:19:44.639510  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHKeyPath
	I1002 20:19:44.639669  498295 main.go:141] libmachine: (addons-760875) Calling .GetSSHUsername
	I1002 20:19:44.639843  498295 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/addons-760875/id_rsa Username:docker}
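
	The sshutil.go line above builds a key-authenticated SSH client to the VM at 192.168.39.220:22 as user "docker". An equivalent sketch with golang.org/x/crypto/ssh (newSSHClient is an illustrative helper; InsecureIgnoreHostKey keeps the sketch short, real code should verify host keys):

	package sshsketch

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// newSSHClient dials the VM with public-key auth, mirroring the fields
	// logged above: IP, port, private key path, and username.
	func newSSHClient(ip, port, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
		}
		return ssh.Dial("tcp", fmt.Sprintf("%s:%s", ip, port), cfg)
	}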
	I1002 20:19:44.666547  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:44.956832  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:44.975007  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.013515  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.985736458s)
	W1002 20:19:45.013575  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 20:19:45.013604  498295 retry.go:31] will retry after 182.126244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
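
	The "no matches for kind VolumeSnapshotClass" failure above is a CRD-ordering problem: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines it, and the API server has not finished registering the new kind on the first pass, which is why the retry below goes through once the CRD is established. The log's retry.go pattern, reduced to a stdlib-only sketch (applyWithRetry is a hypothetical name; minikube's retry.go computes its own backoff):

	package retrysketch

	import (
		"fmt"
		"time"
	)

	// applyWithRetry retries fn with a growing delay between attempts,
	// mirroring the "will retry after 182.126244ms" behavior in the log.
	func applyWithRetry(fn func() error, attempts int, initial time.Duration) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // grow the wait between attempts
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}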
	I1002 20:19:45.196065  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 20:19:45.487531  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:45.487700  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.643865  498295 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.044443529s)
	I1002 20:19:45.643901  498295 system_svc.go:56] duration metric: took 1.044572791s WaitForService to wait for kubelet
	I1002 20:19:45.643913  498295 kubeadm.go:586] duration metric: took 8.862331381s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:19:45.643934  498295 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:19:45.643867  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.161500951s)
	I1002 20:19:45.644035  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:45.644051  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:45.644354  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:45.644384  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:45.644394  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:45.644406  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:45.644422  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:45.644696  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:45.644730  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:45.644743  498295 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-760875"
	I1002 20:19:45.646752  498295 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 20:19:45.648524  498295 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 20:19:45.672592  498295 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 20:19:45.672663  498295 node_conditions.go:123] node cpu capacity is 2
	I1002 20:19:45.672684  498295 node_conditions.go:105] duration metric: took 28.743767ms to run NodePressure ...
	I1002 20:19:45.672719  498295 start.go:241] waiting for startup goroutines ...
	I1002 20:19:45.694191  498295 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 20:19:45.694216  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:45.952750  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:45.957586  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:46.159936  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.449409  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.457563  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:46.656015  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:46.954440  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:46.960466  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.155234  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.244451  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.577860875s)
	W1002 20:19:47.244512  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:47.244525  498295 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.609049939s)
	I1002 20:19:47.244542  498295 retry.go:31] will retry after 489.950749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:47.244685  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.048529867s)
	I1002 20:19:47.244771  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:47.244792  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:47.245070  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:47.245095  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:47.245110  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:47.245128  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:47.245178  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:47.245471  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:47.245492  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:47.246027  498295 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 20:19:47.247089  498295 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 20:19:47.248038  498295 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 20:19:47.248055  498295 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 20:19:47.294610  498295 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 20:19:47.294638  498295 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 20:19:47.341166  498295 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:47.341204  498295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 20:19:47.373528  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 20:19:47.453739  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.464128  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:47.655836  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:47.734848  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:47.949514  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:47.959413  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:48.158076  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.462094  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.088523225s)
	I1002 20:19:48.462152  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:48.462170  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:48.462534  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:48.462555  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:48.462580  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	I1002 20:19:48.462640  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:19:48.462652  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:19:48.462926  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:19:48.462947  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:19:48.463814  498295 addons.go:479] Verifying addon gcp-auth=true in "addons-760875"
	I1002 20:19:48.464955  498295 out.go:179] * Verifying gcp-auth addon...
	I1002 20:19:48.466553  498295 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 20:19:48.489114  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:48.489244  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.491970  498295 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 20:19:48.491993  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:48.654853  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:48.954037  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:48.956256  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:48.973125  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:49.155519  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.399283  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.66436682s)
	W1002 20:19:49.399328  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.399364  498295 retry.go:31] will retry after 312.187218ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:49.448338  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.454880  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:49.469421  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:49.653947  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:49.711869  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:49.950282  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:49.957696  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:49.975905  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.154075  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.448801  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.454308  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.472657  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:50.654077  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:50.900055  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.188113913s)
	W1002 20:19:50.900100  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:50.900123  498295 retry.go:31] will retry after 915.906534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:50.952117  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:50.953833  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:50.971171  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:51.158058  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.449166  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.453196  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.472195  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:51.652617  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:51.816839  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:51.950655  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:51.955877  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:51.974627  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:52.154060  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.448692  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.455214  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.471218  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:52.653785  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:52.826380  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.00948713s)
	W1002 20:19:52.826434  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:52.826460  498295 retry.go:31] will retry after 961.671263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:52.947622  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:52.953574  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:52.971092  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:53.152225  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.448251  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.453660  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.469241  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:53.654524  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:53.788664  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:53.949798  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:53.954957  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:53.969668  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:54.154876  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.447115  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.453988  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.471016  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:54.652737  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:54.828697  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.039985882s)
	W1002 20:19:54.828770  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:54.828798  498295 retry.go:31] will retry after 1.95760013s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:54.948154  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:54.953782  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:54.970056  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.154152  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.447942  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.454102  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.470927  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:55.661223  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:55.949716  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:55.954217  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:55.971097  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:56.167876  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.449916  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.454299  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.473491  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:56.653745  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:56.786624  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:19:56.949170  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:56.954067  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:56.969700  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:57.153457  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.447391  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.453179  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.469445  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:19:57.566023  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:19:57.566063  498295 retry.go:31] will retry after 4.195128704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
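
The retry.go:31 delays across this window climb from 961ms to 1.96s, 4.20s, and then 5.28s, 7.47s, 9.18s, and 17.69s further down: a roughly doubling backoff with jitter. The following is a sketch of that pattern under those assumptions; it is illustrative only and not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with a roughly doubling, jittered delay,
// mirroring the shape of the retry.go delays in the log above.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Add up to 100% jitter so the intervals are not perfectly regular.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, time.Second, func() error {
		// Stand-in for the failing "kubectl apply" seen in the log.
		return errors.New("process exited with status 1")
	})
}
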
	I1002 20:19:57.652619  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:57.948920  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:57.953465  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:57.969725  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:58.152960  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.449472  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.453863  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.612540  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:58.652264  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:58.947727  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:58.955831  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:58.970757  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:59.152236  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.448886  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.456431  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.470117  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:19:59.652867  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:19:59.949123  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:19:59.954268  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:19:59.969724  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.153623  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.780328  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.783972  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:00.784303  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.784321  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:00.949409  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:00.955437  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:00.972074  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:01.152290  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.447409  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.453447  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.470722  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:01.652621  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:01.761747  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:01.951208  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:01.954535  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:01.974113  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.154523  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.447593  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.454234  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 20:20:02.454401  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:02.454439  498295 retry.go:31] will retry after 5.27842623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:02.470032  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:02.652939  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:02.948010  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:02.953042  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:02.969106  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.153415  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.447521  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.453523  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.469525  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:03.651523  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:03.948272  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:03.954654  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:03.973181  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:04.152880  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.448457  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.455037  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.469794  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:04.658111  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:04.947848  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:04.953791  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:04.969146  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.152953  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:05.447137  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.453181  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.470837  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:05.652485  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:05.947930  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:05.953142  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:05.969334  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:06.152699  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.447891  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.453341  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.469767  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:06.653168  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:06.949058  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:06.953605  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:06.969520  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.151906  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.448818  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.453251  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.470014  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:07.655302  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:07.733382  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:07.948012  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:07.958142  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:07.977355  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:08.153384  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.449261  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.454722  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.469814  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:20:08.583935  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:08.583980  498295 retry.go:31] will retry after 7.466381972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:08.652521  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:08.948119  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:08.954107  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:08.970074  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.154470  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.447998  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.454091  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.469639  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:09.651978  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:09.947376  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:09.953654  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:09.970787  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:10.152987  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:10.447001  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:10.454303  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:10.471327  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:10.653252  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:11.139942  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.140514  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.140846  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.152516  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:11.447660  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.454597  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.469836  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:11.652558  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:11.947661  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:11.954650  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:11.969731  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.153034  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.448081  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.455722  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.471461  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:12.652543  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:12.949471  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:12.953691  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:12.972513  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:13.153003  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.448093  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.454087  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.469738  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:13.654685  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:13.948169  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:13.953499  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:13.969511  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.153256  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.448621  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.455488  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.470837  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:14.652566  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:14.948542  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:14.953617  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:14.969105  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:15.152211  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.447499  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.454481  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.473028  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:15.660042  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:15.948483  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:15.953632  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:15.968823  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.050934  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:16.157093  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:16.456459  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.463072  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.470321  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:16.658310  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 20:20:16.907321  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:16.907382  498295 retry.go:31] will retry after 9.18225604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:16.950673  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:16.957791  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:16.969895  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.152252  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.448078  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.453278  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.469429  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:17.652947  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:17.953631  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:17.961132  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:17.973859  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:18.152901  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.448680  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.454382  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.471566  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:18.652521  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:18.949609  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:18.955196  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:18.971613  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.153834  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.449780  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.454820  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.471686  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:19.653116  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:19.948449  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:19.955906  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:19.969825  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.153995  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:20.448877  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:20.454292  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:20.471507  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:20.655883  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.032608  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.032608  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.032859  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:21.153972  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.449009  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.453559  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.470253  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:21.653095  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:21.948035  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:21.953777  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:21.972198  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:22.152949  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:22.448548  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:22.454087  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:22.469494  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:22.652313  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:22.947920  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:22.953427  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:22.970254  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:23.153108  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:23.447947  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:23.453558  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:23.469947  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:23.656178  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:23.947234  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:23.953930  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:23.970451  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:24.155145  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:24.448319  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:24.459178  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:24.470951  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:24.652975  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:24.947181  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:24.953631  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:24.969192  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:25.153365  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:25.448271  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:25.453608  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:25.468890  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:25.652590  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:25.947977  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:25.953028  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:25.969668  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:26.089902  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:26.155274  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:26.647494  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:26.647625  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:26.647793  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:26.652067  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:26.948880  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:26.955698  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:26.971211  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:27.154183  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:27.205353  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.115401897s)
	W1002 20:20:27.205409  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:27.205443  498295 retry.go:31] will retry after 17.692336081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:27.450327  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:27.454091  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:27.473632  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:27.652785  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:27.947973  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:27.957078  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:27.972681  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:28.155326  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:28.447613  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:28.455602  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:28.470616  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:28.654302  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:28.948331  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:28.953831  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:28.971310  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:29.154992  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:29.450152  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:29.453495  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:29.469904  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:29.653091  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:29.947329  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:29.954756  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:29.970648  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:30.152570  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:30.448495  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:30.455126  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:30.471281  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:30.654263  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:31.089111  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:31.089367  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:31.089377  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:31.154846  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:31.448673  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:31.455329  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:31.472526  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:31.653015  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:31.949195  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:31.953266  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:31.971040  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:32.153901  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:32.449828  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:32.454817  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:32.473229  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:32.654735  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:32.952950  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:32.957531  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:32.974239  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:33.155301  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:33.452080  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:33.491251  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:33.491603  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:33.654522  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:33.950037  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:33.955722  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:33.969727  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:34.153689  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:34.451075  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:34.455050  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:34.470136  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:34.653829  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:34.950123  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:34.953189  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:34.969501  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:35.151901  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:35.448121  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:35.453635  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:35.471190  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:35.652869  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:35.947577  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:35.953863  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:35.969240  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.152514  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.447587  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.454044  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.469565  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:36.652087  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:36.947524  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:36.954095  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:36.970111  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.152317  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:37.449568  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:37.455067  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:37.470503  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:37.652350  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:37.947953  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:37.958235  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:37.970222  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:38.153140  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:38.447928  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:38.453435  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:38.469573  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:38.653520  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:38.948333  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:38.953731  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:38.969013  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:39.152506  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:39.447417  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:39.453980  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:39.469560  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:39.655334  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:39.948891  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:39.953546  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:39.970822  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:40.152693  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:40.448242  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:40.453792  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:40.469396  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:40.653870  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:40.950429  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:40.955431  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:40.973204  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:41.152850  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:41.448436  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:41.453749  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:41.469546  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:41.653009  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:41.948098  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:41.955858  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:41.971421  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:42.153140  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:42.447873  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:42.454372  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 20:20:42.470399  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:42.653366  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:42.947953  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:42.953722  498295 kapi.go:107] duration metric: took 58.503062524s to wait for kubernetes.io/minikube-addons=registry ...
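(Editorial note: the kapi.go:96 lines above show minikube polling each addon's pods roughly twice per second until they leave Pending; the kapi.go:107 "duration metric" is simply the elapsed wall time of that loop, 58.5s for the registry selector here. A minimal sketch of such a wait, assuming a client-go clientset; the name waitForPods and its details are illustrative, not minikube's actual kapi implementation.)

    package addonwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPods polls until every pod matching selector is Running,
    // and returns how long that took (the "duration metric").
    // Illustrative only; not minikube's code.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) (time.Duration, error) {
    	start := time.Now()
    	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			if len(pods.Items) == 0 {
    				return false, nil // nothing scheduled yet: "current state: Pending: [<nil>]"
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	return time.Since(start), err
    }

(Returning time.Since(start) alongside the error is what makes the "duration metric: took ..." line cheap to emit once the selector goes healthy.)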
	I1002 20:20:42.968634  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:43.151907  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:43.447195  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:43.469783  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:43.654265  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:43.947768  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:43.970043  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:44.152457  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:44.447657  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:44.471160  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:44.652641  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:44.898911  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:20:44.950117  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:44.970276  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:45.156239  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:45.448410  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:45.471577  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:45.653947  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:45.905073  498295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.006113486s)
	W1002 20:20:45.905130  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:20:45.905159  498295 retry.go:31] will retry after 30.191084895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
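(Editorial note: kubectl validates each YAML document in the applied files client-side; a document with no apiVersion or kind, typically an empty or comment-only document left behind a stray "---" separator, is rejected with exactly this "[apiVersion not set, kind not set]" message, even though the well-formed documents were still applied, hence the "unchanged"/"configured" stdout above. The error text itself names the escape hatch, --validate=false, but the real fix is a complete header on every document. A minimal pre-check sketch of the same condition, assuming gopkg.in/yaml.v3; this illustrates what kubectl rejects and is not code from minikube.)

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // checkManifest reports any YAML document in the stream that lacks
    // apiVersion or kind, the condition kubectl's validator rejects.
    func checkManifest(r io.Reader) error {
    	dec := yaml.NewDecoder(r)
    	for i := 1; ; i++ {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				return nil // end of the multi-document stream
    			}
    			return err
    		}
    		if doc.APIVersion == "" || doc.Kind == "" {
    			return fmt.Errorf("document %d: apiVersion or kind not set", i)
    		}
    	}
    }

    func main() {
    	f, err := os.Open("ig-crd.yaml") // path taken from the log; adjust as needed
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()
    	if err := checkManifest(f); err != nil {
    		fmt.Println("validation would fail:", err)
    	}
    }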
	I1002 20:20:45.946533  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:45.971736  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:46.156895  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:46.448622  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:46.473078  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:46.654319  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:46.952399  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:46.973867  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:47.152454  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:47.448670  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:47.471482  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:47.652391  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:47.947629  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:47.972262  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:48.153839  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:48.450862  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:48.470235  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:48.653198  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:48.947277  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:48.970208  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:49.154902  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:49.448402  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:49.470180  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:49.653181  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:49.949633  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:50.051792  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:50.155141  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:50.448107  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:50.471575  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:50.653613  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:50.948578  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:50.971460  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:51.152525  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:51.448025  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:51.471164  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:51.652743  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:51.967667  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:51.970382  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:52.158761  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:52.451897  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:52.470822  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:52.653809  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:52.950497  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:52.974198  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:53.154750  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.448223  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.470432  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:53.654218  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:53.948859  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:53.979552  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.152473  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.449109  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.470451  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:54.658466  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:54.948490  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:54.970178  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:55.155310  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:55.448189  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:55.470830  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:55.656347  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:55.948882  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:55.969254  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:56.153521  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:56.448179  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:56.469994  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:56.653392  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:56.950521  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:56.971368  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:57.153375  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:57.448046  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:57.469261  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:57.652251  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:57.949847  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:57.975313  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:58.152573  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:58.452096  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:58.470665  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:58.654534  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:58.948389  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:58.972855  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:59.156700  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:59.448826  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:59.471765  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:20:59.654697  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:20:59.952518  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:20:59.970975  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:00.152883  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:00.448406  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:00.470165  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:00.652790  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:00.948804  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:00.970558  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:01.152657  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:01.448174  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:01.470153  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:01.654636  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:01.948364  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:01.974816  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:02.154128  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:02.448257  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:02.470582  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:02.651950  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:02.949034  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:02.971273  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:03.157948  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:03.447510  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:03.471251  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:03.652649  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:03.948317  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:03.972301  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.152981  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.448073  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.470827  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:04.654764  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:04.948782  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:04.970661  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.152816  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.448109  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.470369  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:05.652808  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:05.948355  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:05.970041  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.153128  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 20:21:06.448123  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.470350  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:06.652949  498295 kapi.go:107] duration metric: took 1m21.004421438s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 20:21:06.947266  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:06.969902  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.447765  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.468967  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:07.947305  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:07.969851  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.447295  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.469794  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:08.947237  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:08.971320  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.448731  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.469576  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:09.948375  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:09.970241  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.448455  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.470332  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:10.948418  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:10.970069  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.448254  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.469759  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:11.947846  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:11.970096  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.447678  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.468791  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:12.947350  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:12.970599  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.448369  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.469946  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:13.947498  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:13.971051  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.447477  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.469983  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:14.948065  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:14.969822  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.447262  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.469778  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:15.947429  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:15.970137  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:16.097420  498295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 20:21:16.448461  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.472254  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 20:21:16.753324  498295 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:21:16.753423  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:21:16.753441  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:21:16.753777  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:21:16.753797  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:21:16.753807  498295 main.go:141] libmachine: Making call to close driver server
	I1002 20:21:16.753815  498295 main.go:141] libmachine: (addons-760875) Calling .Close
	I1002 20:21:16.754083  498295 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:21:16.754105  498295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:21:16.754107  498295 main.go:141] libmachine: (addons-760875) DBG | Closing plugin on server side
	W1002 20:21:16.754239  498295 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
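(Editorial note on the failure mode: the first apply at 20:20:45 scheduled one retry after roughly 30s via retry.go:31, the 20:21:16 re-apply failed identically, and addons.go then surfaced the error through out.go as the "Enabling 'inspektor-gadget' returned an error" warning while the ingress-nginx and gcp-auth waits continued, so the addon failure degrades the run rather than aborting it. A minimal sketch of that bounded retry-with-jitter shape; retryApply and its parameters are illustrative, not minikube's retry package.)

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryApply runs apply up to attempts times, sleeping a jittered
    // delay between tries, the shape of the "will retry after" lines above.
    func retryApply(apply func() error, attempts int, base time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = apply(); err == nil {
    			return nil
    		}
    		if i == attempts-1 {
    			break
    		}
    		delay := base + time.Duration(rand.Int63n(int64(base))) // log used ~30s plus jitter
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	// Exhausted attempts: return the last error so the caller can
    	// surface it as a warning instead of aborting the whole run.
    	return fmt.Errorf("running callbacks: [%w]", err)
    }

    func main() {
    	failing := func() error { return fmt.Errorf("apiVersion not set, kind not set") }
    	fmt.Println(retryApply(failing, 2, time.Second)) // short base for the demo
    }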
	I1002 20:21:16.948907  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:16.969176  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.448122  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.469810  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:17.947360  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:17.969753  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.448169  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.469675  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:18.947001  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:18.969641  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.448383  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.470138  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:19.947661  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:19.971266  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.448281  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.470256  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:20.949297  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:20.970299  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.448312  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:21.470100  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:21.948356  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:21.971607  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.448800  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:22.468879  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:22.947613  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:22.970258  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.447425  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:23.470211  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:23.948057  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:23.970930  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.449210  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:24.469732  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:24.947020  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:24.968969  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.447552  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:25.470394  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:25.948033  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:25.970178  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.448350  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:26.469920  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:26.947916  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:26.969577  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.448181  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:27.469392  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:27.947832  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:27.968936  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.448614  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:28.470509  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:28.948020  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:28.970799  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:29.448787  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:29.470202  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:29.947938  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:29.969741  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:30.448015  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:30.469382  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:30.948293  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:30.969890  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:31.447551  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:31.470626  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:31.948044  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:31.969557  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:32.448835  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:32.470029  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:32.947608  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:32.970292  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:33.448257  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:33.469863  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:33.947330  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:33.970934  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:34.448590  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:34.470040  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:34.947921  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:34.969289  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:35.448359  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:35.469784  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:35.947349  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:35.969599  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:36.447876  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:36.469410  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:36.948287  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:36.969587  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:37.447046  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:37.469210  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:37.947597  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:37.970411  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:38.448498  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:38.469825  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:38.948324  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:38.970809  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:39.448769  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:39.469291  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:39.948154  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:39.969945  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:40.448433  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:40.470859  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:40.947262  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:40.969453  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:41.448196  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:41.469678  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:41.949182  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:41.970235  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:42.448590  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:42.470418  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:42.948555  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:42.969911  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:43.447240  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:43.469467  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:43.948164  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:43.970900  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:44.448276  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:44.469770  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:44.948242  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:44.970164  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:45.447943  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:45.469632  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:45.947371  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:45.969676  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:46.447289  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:46.470150  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:46.948622  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:46.970205  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:47.448319  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:47.470056  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:47.947837  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:47.969104  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:48.448050  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:48.469436  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:48.948321  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:48.971036  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:49.448237  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:49.470282  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:49.948348  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:49.970272  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:50.448849  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:50.469470  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:50.948091  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:50.969365  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:51.448076  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:51.469624  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:51.947911  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:51.969667  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:52.447670  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:52.470341  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:52.948229  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:52.970245  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:53.448747  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:53.468904  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:53.947868  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:53.969806  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:54.448099  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:54.469464  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:54.947871  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:54.969184  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:55.448107  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:55.469627  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:55.948864  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:55.969311  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:56.448326  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:56.469741  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:56.947194  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:56.969283  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:57.447892  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:57.468951  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:57.947829  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:57.969084  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:58.448329  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:58.470471  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:58.948160  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:58.972046  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:59.447985  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:59.469451  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:21:59.947922  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:21:59.969778  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:00.447875  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:00.469438  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:00.950185  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:00.969836  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:01.447268  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:01.470192  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:01.948572  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:01.970166  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:02.448334  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:02.469505  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:02.951048  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:02.970146  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:03.448332  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:03.469625  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:03.950037  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:03.970776  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:04.452846  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:04.471046  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:04.948346  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:04.972398  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:05.449725  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:05.470061  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:05.950575  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:05.970343  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:06.448375  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:06.470791  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:06.947683  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:06.971047  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:07.447904  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:07.471691  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:07.950447  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:07.972680  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:08.449533  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:08.470316  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:08.954245  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:08.973786  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:09.749164  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:09.749292  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:09.950407  498295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 20:22:10.050148  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:10.448359  498295 kapi.go:107] duration metric: took 2m26.004342022s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 20:22:10.469998  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:10.970105  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:11.469876  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:11.972100  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:12.473048  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:12.972980  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:13.471382  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:13.972713  498295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 20:22:14.471182  498295 kapi.go:107] duration metric: took 2m26.004626074s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 20:22:14.472268  498295 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-760875 cluster.
	I1002 20:22:14.473095  498295 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 20:22:14.473883  498295 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 20:22:14.474727  498295 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, registry-creds, metrics-server, amd-gpu-device-plugin, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1002 20:22:14.475505  498295 addons.go:514] duration metric: took 2m37.693848605s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns registry-creds metrics-server amd-gpu-device-plugin yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1002 20:22:14.475544  498295 start.go:246] waiting for cluster config update ...
	I1002 20:22:14.475562  498295 start.go:255] writing updated cluster config ...
	I1002 20:22:14.475858  498295 ssh_runner.go:195] Run: rm -f paused
	I1002 20:22:14.483435  498295 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:22:14.487476  498295 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t6k2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:14.494104  498295 pod_ready.go:94] pod "coredns-66bc5c9577-t6k2m" is "Ready"
	I1002 20:22:14.494128  498295 pod_ready.go:86] duration metric: took 6.629744ms for pod "coredns-66bc5c9577-t6k2m" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:14.496611  498295 pod_ready.go:83] waiting for pod "etcd-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:14.500748  498295 pod_ready.go:94] pod "etcd-addons-760875" is "Ready"
	I1002 20:22:14.500768  498295 pod_ready.go:86] duration metric: took 4.139526ms for pod "etcd-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:14.502566  498295 pod_ready.go:83] waiting for pod "kube-apiserver-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:14.506935  498295 pod_ready.go:94] pod "kube-apiserver-addons-760875" is "Ready"
	I1002 20:22:14.506955  498295 pod_ready.go:86] duration metric: took 4.369601ms for pod "kube-apiserver-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:14.508573  498295 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:14.888375  498295 pod_ready.go:94] pod "kube-controller-manager-addons-760875" is "Ready"
	I1002 20:22:14.888404  498295 pod_ready.go:86] duration metric: took 379.811752ms for pod "kube-controller-manager-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:15.088056  498295 pod_ready.go:83] waiting for pod "kube-proxy-lghgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:15.487560  498295 pod_ready.go:94] pod "kube-proxy-lghgd" is "Ready"
	I1002 20:22:15.487587  498295 pod_ready.go:86] duration metric: took 399.507382ms for pod "kube-proxy-lghgd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:15.687633  498295 pod_ready.go:83] waiting for pod "kube-scheduler-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:16.088318  498295 pod_ready.go:94] pod "kube-scheduler-addons-760875" is "Ready"
	I1002 20:22:16.088345  498295 pod_ready.go:86] duration metric: took 400.687471ms for pod "kube-scheduler-addons-760875" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:22:16.088356  498295 pod_ready.go:40] duration metric: took 1.604892813s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:22:16.137648  498295 start.go:623] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 20:22:16.139287  498295 out.go:179] * Done! kubectl is now configured to use "addons-760875" cluster and "default" namespace by default
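The kapi.go:96 and pod_ready.go lines above all record one pattern: poll the API server on a fixed interval for pods matching a label selector until the pods report Ready or a deadline expires. Below is a minimal client-go sketch of that pattern; it is an illustration under stated assumptions, not minikube's actual kapi.go/pod_ready.go code, and the 500ms interval is inferred from the log's roughly half-second cadence.

// A minimal client-go sketch of the readiness polling recorded above (an
// assumption, not minikube's source).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabeledPodReady polls for pods matching selector in ns until at
// least one exists and every match is Ready, or until ctx expires.
func waitForLabeledPodReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	tick := time.NewTicker(500 * time.Millisecond) // interval inferred from the log timestamps
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for pod %q: %w", selector, ctx.Err())
		case <-tick.C:
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForLabeledPodReady(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}

Invoked with the selector "app.kubernetes.io/name=ingress-nginx", this loop prints lines shaped like the kapi.go:96 entries above until the controller pod turns Ready, which in this run took 2m26s.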
	
	
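One note before the CRI-O log: the gcp-auth message above says a pod can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. A hedged sketch of such a pod object follows, in Go for consistency with the sketch above; only the label key comes from the log, while the pod name, image, command, and label value are illustrative assumptions.

// A hedged sketch of the opt-out label described in the gcp-auth message
// above. Only the label key is taken from the log output.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithoutGCPCreds builds a pod carrying the gcp-auth-skip-secret label so
// the addon's webhook can skip mounting GCP credentials into it.
func podWithoutGCPCreds() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			Labels: map[string]string{
				"gcp-auth-skip-secret": "true", // value is an assumption; the key is what the log names
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app", // hypothetical container
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

func main() {
	fmt.Println(podWithoutGCPCreds().Name)
}
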
	==> CRI-O <==
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.148682232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759436726148604219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598014,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77c2f2d0-3007-4077-81e0-3279e54633de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.149540780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3881f1a-2e94-419b-95d1-2a1c22b392ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.149748787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3881f1a-2e94-419b-95d1-2a1c22b392ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.150098677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f96daba21d555f9cb20ed0d51766de7397e524b98ddfc42788242ed6f6319a,PodSandboxId:ea4b72cd820bf0993c46fb0b5f08d9e3e63fa31602682a7bc2ad05f3f0787a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759436583361109157,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a448908-fb28-4d4e-9861-f29c1b50e494,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29192530a83b3a4c7cd3db5bc4214c3b86aadf89768c68e31dbc30ea4e88cacc,PodSandboxId:425f5ffc4a6055406f2c6227a5916fe210ff3e5b57146b600f623d5b798e930f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759436540724831338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 313785c7-79b4-466d-af42-76afbf3a7fe5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733baa0e72021239d108a1b69c78b18c5f15a6b748bd3bbefdab7b6d530e4023,PodSandboxId:f555c6308e895d711e8e89ed11e63281d151a74519984f32a83e61ae3ab131ae,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759436529884821074,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-qzt45,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c1c15167-6e4e-4a3b-9e7d-fc13b53cae5c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1b2e647d7723f5ca480defa90602b04e59e233014638d35c155acd6d090b2ddf,PodSandboxId:f690e89fe7434667b86fc5160a8d8c8ed2a8527dd5aff81536a31165e10c90db,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759436449903818575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7h825,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1e99f57-fbaa-4041-9aff-7d262ab7dbd3,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4082364a42bcc3d6c2d5f7af96209413081958a01170f04aad298de74f0492e,PodSandboxId:5a7c302aae5fa4924bfff4ab4f09d740c8e6e57da351d14982863695eca7b545,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759436449828413494,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7fscc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f3b052a-49a3-4751-b93d-0679d658d9ae,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6845032019caa198aacf313859f8462f1b221decba8e78ed8de91b242afc12a,PodSandboxId:dcfe50fa969d3aa09cb6c8262abb3040e35dc2ca3a996ab345af06b7c24d00a7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759436447998004233,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-wcg92,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c61b302c-3af6-4eca-b091-713734b931a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13029557d25d7bdb88828e1fe44fd799456860973a9c1f8d58ef6345230dc5ba,PodSandboxId:00d8afb7b12fbb0125ed0b8bfb552ea9779762327c3d19acbeea4b52e8300a67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759436434489676351,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 544a1f20-492a-44ee-96d1-f6b8375a80d0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d637ac01cc76a25d714c5b12d37c19a345cec4ceb864e8ef318a6bd0c23ba5,PodSandboxId:776c13bad85f822e1f2a0d9a84d58ffcfb0b853140e8eae192a8fe3497ffa2ba,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759436424447962782,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-htxhl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39ca673b-5b75-4f90-916e-495bf4d2585f,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499d620247e2692023add51a668b50a593744c085fb8b4ccdf4f07d201fd63f5,PodSandboxId:536ed7ef8ed337c8bdc0f197246641d2dd94
9eb3bab410baa37ea15d4ba31c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759436404593012716,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6fptp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f251e5-b493-4a24-803f-74575247bd51,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d33f24df065bec40ba5398706276defd087f3fd362e04af6a7599c1a45dc2bf,PodSandbo
xId:2149a9f12d0df9d924c0d6263a49bd6f1410a8602e9622d08c78993e62468056,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759436384275447904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd84c2f-23ba-439f-8d70-e952ae36b801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f936842525ffc8fc6526af7b0e421539350414ed61c78ae3f7220a0b213c2e,PodSandboxId:0b2eed37
0f2ed3ad782a45d386bf7c21d631238341aab80a4eccb3faf718bcea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759436378345800411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t6k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f42852-82cf-4a0a-b8e2-c84b4392aff8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:020ac7d4e99a03be998c00c9e843c3116a4e1e415f1c4281712d607586e2d05f,PodSandboxId:e5d96f990175b3395aa0b8121ba538bac40984e2c88747ee8d06ed6189cf9c97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759436377953568863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lghgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f6e9f1-4463-4760-b9de-cdbe4720ab23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776f20e6d352c132d42279dc595915c26bed3175ae138925cfaeecd8a07d127a,PodSandboxId:c772e557ec70cac249bd6c249ee4f627daeb7526143ce6b17888e48a9e5a0c6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759436366533683742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdfb937dc8c33bc025adb9a3162aecc,},Annotations:map[string]string{io.kubernetes.container.hash: 9c
112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7786d8284ff2ad27a107e931cd1b970371f66403217e5bbc9dc2a9c562ef0e2,PodSandboxId:f8aa12bdddebe65021b1ce55f2b74e63312f5a75bee94f551792fbe381de2a21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759436366510645508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9099c661fa8d19684a578d1c9b664d62,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f0e2109ac66989d321eba4179cd12c78f217b738982e4f1e5db9642ffbe0a2,PodSandboxId:ba51c1c48a3a512a7cbd5e74f38d2c7fd92b82b166e5c195c4fe6fd569c7f863,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759436366481551154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586501c7e1bcfd6ff836d5d852943026,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c1c1943a90d99941f77f5375f128a5759783c3d33d33786f97fa5028402f91,PodSandboxId:11c5d193413a6fcad8fbb2f11fbf93e49a3247037f127b5ceb8362d4fea088ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedA
t:1759436366466578080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf1d0871e38cfe814010f9da7c537ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3881f1a-2e94-419b-95d1-2a1c22b392ab name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.191815962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2acf200-5a96-481f-8dfc-238510f1fd88 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.191890270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2acf200-5a96-481f-8dfc-238510f1fd88 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.193500658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=371e9f29-6330-4a38-a3c2-766015cd04b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.194707305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759436726194677176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598014,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=371e9f29-6330-4a38-a3c2-766015cd04b5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.195587552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbe65d3d-1b47-4343-9ce2-e1ae43d20baa name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.195677074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbe65d3d-1b47-4343-9ce2-e1ae43d20baa name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.196014050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f96daba21d555f9cb20ed0d51766de7397e524b98ddfc42788242ed6f6319a,PodSandboxId:ea4b72cd820bf0993c46fb0b5f08d9e3e63fa31602682a7bc2ad05f3f0787a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759436583361109157,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a448908-fb28-4d4e-9861-f29c1b50e494,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29192530a83b3a4c7cd3db5bc4214c3b86aadf89768c68e31dbc30ea4e88cacc,PodSandboxId:425f5ffc4a6055406f2c6227a5916fe210ff3e5b57146b600f623d5b798e930f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759436540724831338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 313785c7-79b4-466d-af42-76afbf3a7fe5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733baa0e72021239d108a1b69c78b18c5f15a6b748bd3bbefdab7b6d530e4023,PodSandboxId:f555c6308e895d711e8e89ed11e63281d151a74519984f32a83e61ae3ab131ae,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759436529884821074,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-qzt45,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c1c15167-6e4e-4a3b-9e7d-fc13b53cae5c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1b2e647d7723f5ca480defa90602b04e59e233014638d35c155acd6d090b2ddf,PodSandboxId:f690e89fe7434667b86fc5160a8d8c8ed2a8527dd5aff81536a31165e10c90db,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759436449903818575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7h825,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1e99f57-fbaa-4041-9aff-7d262ab7dbd3,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4082364a42bcc3d6c2d5f7af96209413081958a01170f04aad298de74f0492e,PodSandboxId:5a7c302aae5fa4924bfff4ab4f09d740c8e6e57da351d14982863695eca7b545,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759436449828413494,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7fscc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f3b052a-49a3-4751-b93d-0679d658d9ae,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6845032019caa198aacf313859f8462f1b221decba8e78ed8de91b242afc12a,PodSandboxId:dcfe50fa969d3aa09cb6c8262abb3040e35dc2ca3a996ab345af06b7c24d00a7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759436447998004233,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-wcg92,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c61b302c-3af6-4eca-b091-713734b931a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13029557d25d7bdb88828e1fe44fd799456860973a9c1f8d58ef6345230dc5ba,PodSandboxId:00d8afb7b12fbb0125ed0b8bfb552ea9779762327c3d19acbeea4b52e8300a67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759436434489676351,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 544a1f20-492a-44ee-96d1-f6b8375a80d0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d637ac01cc76a25d714c5b12d37c19a345cec4ceb864e8ef318a6bd0c23ba5,PodSandboxId:776c13bad85f822e1f2a0d9a84d58ffcfb0b853140e8eae192a8fe3497ffa2ba,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759436424447962782,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-htxhl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39ca673b-5b75-4f90-916e-495bf4d2585f,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499d620247e2692023add51a668b50a593744c085fb8b4ccdf4f07d201fd63f5,PodSandboxId:536ed7ef8ed337c8bdc0f197246641d2dd94
9eb3bab410baa37ea15d4ba31c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759436404593012716,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6fptp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f251e5-b493-4a24-803f-74575247bd51,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d33f24df065bec40ba5398706276defd087f3fd362e04af6a7599c1a45dc2bf,PodSandbo
xId:2149a9f12d0df9d924c0d6263a49bd6f1410a8602e9622d08c78993e62468056,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759436384275447904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd84c2f-23ba-439f-8d70-e952ae36b801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f936842525ffc8fc6526af7b0e421539350414ed61c78ae3f7220a0b213c2e,PodSandboxId:0b2eed37
0f2ed3ad782a45d386bf7c21d631238341aab80a4eccb3faf718bcea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759436378345800411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t6k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f42852-82cf-4a0a-b8e2-c84b4392aff8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:020ac7d4e99a03be998c00c9e843c3116a4e1e415f1c4281712d607586e2d05f,PodSandboxId:e5d96f990175b3395aa0b8121ba538bac40984e2c88747ee8d06ed6189cf9c97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759436377953568863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lghgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f6e9f1-4463-4760-b9de-cdbe4720ab23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776f20e6d352c132d42279dc595915c26bed3175ae138925cfaeecd8a07d127a,PodSandboxId:c772e557ec70cac249bd6c249ee4f627daeb7526143ce6b17888e48a9e5a0c6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759436366533683742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdfb937dc8c33bc025adb9a3162aecc,},Annotations:map[string]string{io.kubernetes.container.hash: 9c
112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7786d8284ff2ad27a107e931cd1b970371f66403217e5bbc9dc2a9c562ef0e2,PodSandboxId:f8aa12bdddebe65021b1ce55f2b74e63312f5a75bee94f551792fbe381de2a21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759436366510645508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9099c661fa8d19684a578d1c9b664d62,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f0e2109ac66989d321eba4179cd12c78f217b738982e4f1e5db9642ffbe0a2,PodSandboxId:ba51c1c48a3a512a7cbd5e74f38d2c7fd92b82b166e5c195c4fe6fd569c7f863,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759436366481551154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586501c7e1bcfd6ff836d5d852943026,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c1c1943a90d99941f77f5375f128a5759783c3d33d33786f97fa5028402f91,PodSandboxId:11c5d193413a6fcad8fbb2f11fbf93e49a3247037f127b5ceb8362d4fea088ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedA
t:1759436366466578080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf1d0871e38cfe814010f9da7c537ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbe65d3d-1b47-4343-9ce2-e1ae43d20baa name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.231568583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=125e4ef1-33c5-4331-ab2d-6f6b32a89a02 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.231831176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=125e4ef1-33c5-4331-ab2d-6f6b32a89a02 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.233170553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf9b2b29-a4c9-47ea-9dac-ecf71a27c9be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.234548738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759436726234519232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598014,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf9b2b29-a4c9-47ea-9dac-ecf71a27c9be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.235186975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=033d789d-b03d-4326-837f-57ffda67f830 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.235322374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=033d789d-b03d-4326-837f-57ffda67f830 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.235661014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f96daba21d555f9cb20ed0d51766de7397e524b98ddfc42788242ed6f6319a,PodSandboxId:ea4b72cd820bf0993c46fb0b5f08d9e3e63fa31602682a7bc2ad05f3f0787a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759436583361109157,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a448908-fb28-4d4e-9861-f29c1b50e494,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29192530a83b3a4c7cd3db5bc4214c3b86aadf89768c68e31dbc30ea4e88cacc,PodSandboxId:425f5ffc4a6055406f2c6227a5916fe210ff3e5b57146b600f623d5b798e930f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759436540724831338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 313785c7-79b4-466d-af42-76afbf3a7fe5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733baa0e72021239d108a1b69c78b18c5f15a6b748bd3bbefdab7b6d530e4023,PodSandboxId:f555c6308e895d711e8e89ed11e63281d151a74519984f32a83e61ae3ab131ae,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759436529884821074,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-qzt45,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c1c15167-6e4e-4a3b-9e7d-fc13b53cae5c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1b2e647d7723f5ca480defa90602b04e59e233014638d35c155acd6d090b2ddf,PodSandboxId:f690e89fe7434667b86fc5160a8d8c8ed2a8527dd5aff81536a31165e10c90db,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759436449903818575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7h825,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1e99f57-fbaa-4041-9aff-7d262ab7dbd3,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4082364a42bcc3d6c2d5f7af96209413081958a01170f04aad298de74f0492e,PodSandboxId:5a7c302aae5fa4924bfff4ab4f09d740c8e6e57da351d14982863695eca7b545,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759436449828413494,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7fscc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f3b052a-49a3-4751-b93d-0679d658d9ae,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6845032019caa198aacf313859f8462f1b221decba8e78ed8de91b242afc12a,PodSandboxId:dcfe50fa969d3aa09cb6c8262abb3040e35dc2ca3a996ab345af06b7c24d00a7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759436447998004233,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-wcg92,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c61b302c-3af6-4eca-b091-713734b931a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13029557d25d7bdb88828e1fe44fd799456860973a9c1f8d58ef6345230dc5ba,PodSandboxId:00d8afb7b12fbb0125ed0b8bfb552ea9779762327c3d19acbeea4b52e8300a67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759436434489676351,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 544a1f20-492a-44ee-96d1-f6b8375a80d0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d637ac01cc76a25d714c5b12d37c19a345cec4ceb864e8ef318a6bd0c23ba5,PodSandboxId:776c13bad85f822e1f2a0d9a84d58ffcfb0b853140e8eae192a8fe3497ffa2ba,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759436424447962782,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-htxhl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39ca673b-5b75-4f90-916e-495bf4d2585f,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499d620247e2692023add51a668b50a593744c085fb8b4ccdf4f07d201fd63f5,PodSandboxId:536ed7ef8ed337c8bdc0f197246641d2dd94
9eb3bab410baa37ea15d4ba31c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759436404593012716,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6fptp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f251e5-b493-4a24-803f-74575247bd51,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d33f24df065bec40ba5398706276defd087f3fd362e04af6a7599c1a45dc2bf,PodSandbo
xId:2149a9f12d0df9d924c0d6263a49bd6f1410a8602e9622d08c78993e62468056,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759436384275447904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd84c2f-23ba-439f-8d70-e952ae36b801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f936842525ffc8fc6526af7b0e421539350414ed61c78ae3f7220a0b213c2e,PodSandboxId:0b2eed37
0f2ed3ad782a45d386bf7c21d631238341aab80a4eccb3faf718bcea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759436378345800411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t6k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f42852-82cf-4a0a-b8e2-c84b4392aff8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:020ac7d4e99a03be998c00c9e843c3116a4e1e415f1c4281712d607586e2d05f,PodSandboxId:e5d96f990175b3395aa0b8121ba538bac40984e2c88747ee8d06ed6189cf9c97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759436377953568863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lghgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f6e9f1-4463-4760-b9de-cdbe4720ab23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776f20e6d352c132d42279dc595915c26bed3175ae138925cfaeecd8a07d127a,PodSandboxId:c772e557ec70cac249bd6c249ee4f627daeb7526143ce6b17888e48a9e5a0c6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759436366533683742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdfb937dc8c33bc025adb9a3162aecc,},Annotations:map[string]string{io.kubernetes.container.hash: 9c
112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7786d8284ff2ad27a107e931cd1b970371f66403217e5bbc9dc2a9c562ef0e2,PodSandboxId:f8aa12bdddebe65021b1ce55f2b74e63312f5a75bee94f551792fbe381de2a21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759436366510645508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9099c661fa8d19684a578d1c9b664d62,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f0e2109ac66989d321eba4179cd12c78f217b738982e4f1e5db9642ffbe0a2,PodSandboxId:ba51c1c48a3a512a7cbd5e74f38d2c7fd92b82b166e5c195c4fe6fd569c7f863,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759436366481551154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586501c7e1bcfd6ff836d5d852943026,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c1c1943a90d99941f77f5375f128a5759783c3d33d33786f97fa5028402f91,PodSandboxId:11c5d193413a6fcad8fbb2f11fbf93e49a3247037f127b5ceb8362d4fea088ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedA
t:1759436366466578080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf1d0871e38cfe814010f9da7c537ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=033d789d-b03d-4326-837f-57ffda67f830 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.270961109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8126e2b1-39f3-4185-8454-3ca9a610e23b name=/runtime.v1.RuntimeService/Version
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.271052572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8126e2b1-39f3-4185-8454-3ca9a610e23b name=/runtime.v1.RuntimeService/Version
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.272897956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fff4a82d-b289-4545-ac6c-7853ec6717c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.274457042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759436726274430347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598014,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fff4a82d-b289-4545-ac6c-7853ec6717c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.274982921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56e72455-7142-414b-8e4e-f5c8e9a52c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.275161003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56e72455-7142-414b-8e4e-f5c8e9a52c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:25:26 addons-760875 crio[810]: time="2025-10-02 20:25:26.275708647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60f96daba21d555f9cb20ed0d51766de7397e524b98ddfc42788242ed6f6319a,PodSandboxId:ea4b72cd820bf0993c46fb0b5f08d9e3e63fa31602682a7bc2ad05f3f0787a5d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759436583361109157,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a448908-fb28-4d4e-9861-f29c1b50e494,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29192530a83b3a4c7cd3db5bc4214c3b86aadf89768c68e31dbc30ea4e88cacc,PodSandboxId:425f5ffc4a6055406f2c6227a5916fe210ff3e5b57146b600f623d5b798e930f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759436540724831338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 313785c7-79b4-466d-af42-76afbf3a7fe5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:733baa0e72021239d108a1b69c78b18c5f15a6b748bd3bbefdab7b6d530e4023,PodSandboxId:f555c6308e895d711e8e89ed11e63281d151a74519984f32a83e61ae3ab131ae,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759436529884821074,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-qzt45,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c1c15167-6e4e-4a3b-9e7d-fc13b53cae5c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1b2e647d7723f5ca480defa90602b04e59e233014638d35c155acd6d090b2ddf,PodSandboxId:f690e89fe7434667b86fc5160a8d8c8ed2a8527dd5aff81536a31165e10c90db,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1759436449903818575,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7h825,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1e99f57-fbaa-4041-9aff-7d262ab7dbd3,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4082364a42bcc3d6c2d5f7af96209413081958a01170f04aad298de74f0492e,PodSandboxId:5a7c302aae5fa4924bfff4ab4f09d740c8e6e57da351d14982863695eca7b545,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759436449828413494,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7fscc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f3b052a-49a3-4751-b93d-0679d658d9ae,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6845032019caa198aacf313859f8462f1b221decba8e78ed8de91b242afc12a,PodSandboxId:dcfe50fa969d3aa09cb6c8262abb3040e35dc2ca3a996ab345af06b7c24d00a7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759436447998004233,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-wcg92,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c61b302c-3af6-4eca-b091-713734b931a5,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13029557d25d7bdb88828e1fe44fd799456860973a9c1f8d58ef6345230dc5ba,PodSandboxId:00d8afb7b12fbb0125ed0b8bfb552ea9779762327c3d19acbeea4b52e8300a67,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759436434489676351,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 544a1f20-492a-44ee-96d1-f6b8375a80d0,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d637ac01cc76a25d714c5b12d37c19a345cec4ceb864e8ef318a6bd0c23ba5,PodSandboxId:776c13bad85f822e1f2a0d9a84d58ffcfb0b853140e8eae192a8fe3497ffa2ba,Metadata:&ContainerMetadata{Name:local-p
ath-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1759436424447962782,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-htxhl,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39ca673b-5b75-4f90-916e-495bf4d2585f,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:499d620247e2692023add51a668b50a593744c085fb8b4ccdf4f07d201fd63f5,PodSandboxId:536ed7ef8ed337c8bdc0f197246641d2dd94
9eb3bab410baa37ea15d4ba31c01,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759436404593012716,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-6fptp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58f251e5-b493-4a24-803f-74575247bd51,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d33f24df065bec40ba5398706276defd087f3fd362e04af6a7599c1a45dc2bf,PodSandbo
xId:2149a9f12d0df9d924c0d6263a49bd6f1410a8602e9622d08c78993e62468056,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759436384275447904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcd84c2f-23ba-439f-8d70-e952ae36b801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f936842525ffc8fc6526af7b0e421539350414ed61c78ae3f7220a0b213c2e,PodSandboxId:0b2eed37
0f2ed3ad782a45d386bf7c21d631238341aab80a4eccb3faf718bcea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759436378345800411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t6k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2f42852-82cf-4a0a-b8e2-c84b4392aff8,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:020ac7d4e99a03be998c00c9e843c3116a4e1e415f1c4281712d607586e2d05f,PodSandboxId:e5d96f990175b3395aa0b8121ba538bac40984e2c88747ee8d06ed6189cf9c97,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759436377953568863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lghgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17f6e9f1-4463-4760-b9de-cdbe4720ab23,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:776f20e6d352c132d42279dc595915c26bed3175ae138925cfaeecd8a07d127a,PodSandboxId:c772e557ec70cac249bd6c249ee4f627daeb7526143ce6b17888e48a9e5a0c6e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759436366533683742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdfb937dc8c33bc025adb9a3162aecc,},Annotations:map[string]string{io.kubernetes.container.hash: 9c
112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7786d8284ff2ad27a107e931cd1b970371f66403217e5bbc9dc2a9c562ef0e2,PodSandboxId:f8aa12bdddebe65021b1ce55f2b74e63312f5a75bee94f551792fbe381de2a21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759436366510645508,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9099c661fa8d19684a578d1c9b664d62,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57f0e2109ac66989d321eba4179cd12c78f217b738982e4f1e5db9642ffbe0a2,PodSandboxId:ba51c1c48a3a512a7cbd5e74f38d2c7fd92b82b166e5c195c4fe6fd569c7f863,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759436366481551154,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 586501c7e1bcfd6ff836d5d852943026,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c1c1943a90d99941f77f5375f128a5759783c3d33d33786f97fa5028402f91,PodSandboxId:11c5d193413a6fcad8fbb2f11fbf93e49a3247037f127b5ceb8362d4fea088ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedA
t:1759436366466578080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-760875,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bf1d0871e38cfe814010f9da7c537ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56e72455-7142-414b-8e4e-f5c8e9a52c0d name=/runtime.v1.RuntimeService/ListContainers
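The repeated Version / ImageFsInfo / ListContainers request-response pairs above are the kubelet's periodic CRI polling of CRI-O over its unix socket. For reference, a minimal Go sketch of the same ListContainers call, assuming the default CRI-O socket path /var/run/crio/crio.sock (an assumption; in minikube the socket lives inside the VM, not on the host):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The kubelet talks to CRI-O over a local unix socket; no TLS involved.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter reproduces the "No filters were applied, returning
	// full container list" debug line in the journal above.
	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-22s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}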
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60f96daba21d5       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   ea4b72cd820bf       nginx
	29192530a83b3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   425f5ffc4a605       busybox
	733baa0e72021       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   f555c6308e895       ingress-nginx-controller-9cc49f96f-qzt45
	1b2e647d7723f       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     1                   f690e89fe7434       ingress-nginx-admission-patch-7h825
	f4082364a42bc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   5a7c302aae5fa       ingress-nginx-admission-create-7fscc
	b6845032019ca       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago       Running             gadget                    0                   dcfe50fa969d3       gadget-wcg92
	13029557d25d7       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   00d8afb7b12fb       kube-ingress-dns-minikube
	f4d637ac01cc7       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             5 minutes ago       Running             local-path-provisioner    0                   776c13bad85f8       local-path-provisioner-648f6765c9-htxhl
	499d620247e26       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   536ed7ef8ed33       amd-gpu-device-plugin-6fptp
	1d33f24df065b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   2149a9f12d0df       storage-provisioner
	23f936842525f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   0b2eed370f2ed       coredns-66bc5c9577-t6k2m
	020ac7d4e99a0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   e5d96f990175b       kube-proxy-lghgd
	776f20e6d352c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   c772e557ec70c       kube-controller-manager-addons-760875
	f7786d8284ff2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   f8aa12bdddebe       kube-scheduler-addons-760875
	57f0e2109ac66       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   ba51c1c48a3a5       kube-apiserver-addons-760875
	b1c1c1943a90d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   11c5d193413a6       etcd-addons-760875
	
	
	==> coredns [23f936842525ffc8fc6526af7b0e421539350414ed61c78ae3f7220a0b213c2e] <==
	[INFO] 10.244.0.8:35954 - 47355 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000565793s
	[INFO] 10.244.0.8:35954 - 11746 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093337s
	[INFO] 10.244.0.8:35954 - 8783 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093249s
	[INFO] 10.244.0.8:35954 - 57808 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000200558s
	[INFO] 10.244.0.8:35954 - 7860 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00025817s
	[INFO] 10.244.0.8:35954 - 37327 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000133186s
	[INFO] 10.244.0.8:35954 - 8701 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000242639s
	[INFO] 10.244.0.8:44868 - 56940 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100032s
	[INFO] 10.244.0.8:44868 - 57245 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000114276s
	[INFO] 10.244.0.8:45262 - 20976 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000187765s
	[INFO] 10.244.0.8:45262 - 20494 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000234488s
	[INFO] 10.244.0.8:45918 - 700 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148768s
	[INFO] 10.244.0.8:45918 - 965 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118785s
	[INFO] 10.244.0.8:43298 - 61542 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088706s
	[INFO] 10.244.0.8:43298 - 61720 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128104s
	[INFO] 10.244.0.23:48354 - 17870 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000784091s
	[INFO] 10.244.0.23:54795 - 49889 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000457523s
	[INFO] 10.244.0.23:57240 - 55829 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009446s
	[INFO] 10.244.0.23:42651 - 54432 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000085484s
	[INFO] 10.244.0.23:34557 - 44214 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000176866s
	[INFO] 10.244.0.23:58936 - 15828 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000267145s
	[INFO] 10.244.0.23:52162 - 58865 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001061169s
	[INFO] 10.244.0.23:57889 - 21629 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004036635s
	[INFO] 10.244.0.27:33953 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000261176s
	[INFO] 10.244.0.27:42212 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096512s
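The NXDOMAIN runs above are ordinary search-path expansion, not failures: with the default pod resolv.conf (ndots:5), a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended (<ns>.svc.cluster.local, svc.cluster.local, cluster.local), and only the final absolute query returns NOERROR. A minimal sketch reissuing that final query with github.com/miekg/dns, assuming the cluster DNS ClusterIP is the conventional 10.96.0.10 (an assumption; the log only shows pod-side source addresses):

package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	c := new(dns.Client)
	m := new(dns.Msg)
	// The same name CoreDNS answered NOERROR for in the log above.
	m.SetQuestion(dns.Fqdn("registry.kube-system.svc.cluster.local"), dns.TypeA)

	// 10.96.0.10:53 is the usual kube-dns Service IP; adjust per cluster.
	in, rtt, err := c.Exchange(m, "10.96.0.10:53")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("rcode=%s rtt=%s answers=%d\n",
		dns.RcodeToString[in.Rcode], rtt, len(in.Answer))
}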
	
	
	==> describe nodes <==
	Name:               addons-760875
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-760875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=addons-760875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_19_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-760875
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:19:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-760875
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:25:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:23:37 +0000   Thu, 02 Oct 2025 20:19:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:23:37 +0000   Thu, 02 Oct 2025 20:19:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:23:37 +0000   Thu, 02 Oct 2025 20:19:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:23:37 +0000   Thu, 02 Oct 2025 20:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    addons-760875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 1cfe69ebe8cf4278a015d4098f2f3935
	  System UUID:                1cfe69eb-e8cf-4278-a015-d4098f2f3935
	  Boot ID:                    89918264-aaf5-482c-9824-97672a573668
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     hello-world-app-5d498dc89-l85k9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-wcg92                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-qzt45    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m42s
	  kube-system                 amd-gpu-device-plugin-6fptp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 coredns-66bc5c9577-t6k2m                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m49s
	  kube-system                 etcd-addons-760875                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m56s
	  kube-system                 kube-apiserver-addons-760875                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-addons-760875       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-proxy-lghgd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-scheduler-addons-760875                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  local-path-storage          local-path-provisioner-648f6765c9-htxhl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m47s  kube-proxy       
	  Normal  Starting                 5m55s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m55s  kubelet          Node addons-760875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s  kubelet          Node addons-760875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s  kubelet          Node addons-760875 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m54s  kubelet          Node addons-760875 status is now: NodeReady
	  Normal  RegisteredNode           5m51s  node-controller  Node addons-760875 event: Registered Node addons-760875 in Controller
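Everything in the describe block above is rendered from the Node object's status. A minimal client-go sketch, assuming a kubeconfig at the default path that points at this cluster (hypothetical outside the CI run), fetching the same conditions and allocatable figures:

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"addons-760875", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The Conditions table above (MemoryPressure, DiskPressure, ...).
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
	}
	// The Allocatable section above.
	fmt.Printf("allocatable cpu=%s memory=%s\n",
		node.Status.Allocatable.Cpu(), node.Status.Allocatable.Memory())
}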
	
	
	==> dmesg <==
	[  +0.631199] kauditd_printk_skb: 369 callbacks suppressed
	[  +0.978383] kauditd_printk_skb: 452 callbacks suppressed
	[Oct 2 20:20] kauditd_printk_skb: 167 callbacks suppressed
	[ +11.912241] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.678498] kauditd_printk_skb: 44 callbacks suppressed
	[ +11.715783] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.692146] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.824432] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.097491] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.079323] kauditd_printk_skb: 80 callbacks suppressed
	[Oct 2 20:21] kauditd_printk_skb: 60 callbacks suppressed
	[Oct 2 20:22] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000030] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.560510] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.652114] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.403375] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.871650] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.735097] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.286152] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.957551] kauditd_printk_skb: 90 callbacks suppressed
	[  +1.075751] kauditd_printk_skb: 136 callbacks suppressed
	[Oct 2 20:23] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 108 callbacks suppressed
	[  +7.538339] kauditd_printk_skb: 112 callbacks suppressed
	[Oct 2 20:25] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [b1c1c1943a90d99941f77f5375f128a5759783c3d33d33786f97fa5028402f91] <==
	{"level":"warn","ts":"2025-10-02T20:22:09.736770Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"210.85959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T20:22:09.736784Z","caller":"traceutil/trace.go:172","msg":"trace[766789162] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1230; }","duration":"210.870368ms","start":"2025-10-02T20:22:09.525908Z","end":"2025-10-02T20:22:09.736778Z","steps":["trace[766789162] 'agreement among raft nodes before linearized reading'  (duration: 210.851985ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T20:22:09.736841Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T20:22:09.321019Z","time spent":"415.758369ms","remote":"127.0.0.1:43314","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1224 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-10-02T20:22:09.736979Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.206776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T20:22:09.736999Z","caller":"traceutil/trace.go:172","msg":"trace[85010246] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1230; }","duration":"218.225485ms","start":"2025-10-02T20:22:09.518767Z","end":"2025-10-02T20:22:09.736992Z","steps":["trace[85010246] 'agreement among raft nodes before linearized reading'  (duration: 218.193476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T20:22:09.736717Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.408731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T20:22:09.737064Z","caller":"traceutil/trace.go:172","msg":"trace[2120695543] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"295.776281ms","start":"2025-10-02T20:22:09.441280Z","end":"2025-10-02T20:22:09.737057Z","steps":["trace[2120695543] 'agreement among raft nodes before linearized reading'  (duration: 295.382384ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T20:22:09.737120Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"231.245402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-02T20:22:09.737134Z","caller":"traceutil/trace.go:172","msg":"trace[2009727639] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1230; }","duration":"231.26122ms","start":"2025-10-02T20:22:09.505869Z","end":"2025-10-02T20:22:09.737130Z","steps":["trace[2009727639] 'agreement among raft nodes before linearized reading'  (duration: 231.198063ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T20:22:09.736746Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"273.294172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T20:22:09.737191Z","caller":"traceutil/trace.go:172","msg":"trace[910402049] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"273.73721ms","start":"2025-10-02T20:22:09.463447Z","end":"2025-10-02T20:22:09.737184Z","steps":["trace[910402049] 'agreement among raft nodes before linearized reading'  (duration: 273.287617ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T20:22:09.737213Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.742004ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T20:22:09.737276Z","caller":"traceutil/trace.go:172","msg":"trace[1725582141] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1230; }","duration":"251.753713ms","start":"2025-10-02T20:22:09.485467Z","end":"2025-10-02T20:22:09.737221Z","steps":["trace[1725582141] 'agreement among raft nodes before linearized reading'  (duration: 251.733924ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T20:22:42.026533Z","caller":"traceutil/trace.go:172","msg":"trace[2014253368] linearizableReadLoop","detail":"{readStateIndex:1503; appliedIndex:1503; }","duration":"119.635976ms","start":"2025-10-02T20:22:41.906871Z","end":"2025-10-02T20:22:42.026507Z","steps":["trace[2014253368] 'read index received'  (duration: 119.630205ms)","trace[2014253368] 'applied index is now lower than readState.Index'  (duration: 4.617µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T20:22:42.028442Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.586704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-02T20:22:42.028504Z","caller":"traceutil/trace.go:172","msg":"trace[106906763] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1441; }","duration":"121.662716ms","start":"2025-10-02T20:22:41.906832Z","end":"2025-10-02T20:22:42.028495Z","steps":["trace[106906763] 'agreement among raft nodes before linearized reading'  (duration: 120.794262ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T20:22:42.028939Z","caller":"traceutil/trace.go:172","msg":"trace[1733570192] transaction","detail":"{read_only:false; response_revision:1442; number_of_response:1; }","duration":"216.737281ms","start":"2025-10-02T20:22:41.812190Z","end":"2025-10-02T20:22:42.028927Z","steps":["trace[1733570192] 'process raft request'  (duration: 215.430095ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T20:22:47.548845Z","caller":"traceutil/trace.go:172","msg":"trace[1021858553] transaction","detail":"{read_only:false; response_revision:1483; number_of_response:1; }","duration":"152.852904ms","start":"2025-10-02T20:22:47.395980Z","end":"2025-10-02T20:22:47.548833Z","steps":["trace[1021858553] 'process raft request'  (duration: 151.826532ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T20:22:48.847859Z","caller":"traceutil/trace.go:172","msg":"trace[1482630921] transaction","detail":"{read_only:false; response_revision:1485; number_of_response:1; }","duration":"280.875775ms","start":"2025-10-02T20:22:48.566970Z","end":"2025-10-02T20:22:48.847846Z","steps":["trace[1482630921] 'process raft request'  (duration: 280.053949ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T20:23:14.728015Z","caller":"traceutil/trace.go:172","msg":"trace[1808111594] linearizableReadLoop","detail":"{readStateIndex:1814; appliedIndex:1814; }","duration":"242.321298ms","start":"2025-10-02T20:23:14.485678Z","end":"2025-10-02T20:23:14.728000Z","steps":["trace[1808111594] 'read index received'  (duration: 242.315919ms)","trace[1808111594] 'applied index is now lower than readState.Index'  (duration: 3.616µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T20:23:14.728112Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"242.419455ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T20:23:14.728146Z","caller":"traceutil/trace.go:172","msg":"trace[1943397223] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1738; }","duration":"242.449804ms","start":"2025-10-02T20:23:14.485674Z","end":"2025-10-02T20:23:14.728124Z","steps":["trace[1943397223] 'agreement among raft nodes before linearized reading'  (duration: 242.402078ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T20:23:14.728179Z","caller":"traceutil/trace.go:172","msg":"trace[1178680280] transaction","detail":"{read_only:false; response_revision:1739; number_of_response:1; }","duration":"243.300839ms","start":"2025-10-02T20:23:14.484868Z","end":"2025-10-02T20:23:14.728168Z","steps":["trace[1178680280] 'process raft request'  (duration: 243.167821ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T20:23:14.728580Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.872451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T20:23:14.728703Z","caller":"traceutil/trace.go:172","msg":"trace[254070445] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1739; }","duration":"144.999519ms","start":"2025-10-02T20:23:14.583696Z","end":"2025-10-02T20:23:14.728695Z","steps":["trace[254070445] 'agreement among raft nodes before linearized reading'  (duration: 144.836489ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:25:26 up 6 min,  0 users,  load average: 0.40, 0.98, 0.56
	Linux addons-760875 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [57f0e2109ac66989d321eba4179cd12c78f217b738982e4f1e5db9642ffbe0a2] <==
	 > logger="UnhandledError"
	E1002 20:20:17.969675       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.151.203:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.151.203:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.151.203:443: connect: connection refused" logger="UnhandledError"
	E1002 20:20:17.973947       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.151.203:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.151.203:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.151.203:443: connect: connection refused" logger="UnhandledError"
	I1002 20:20:18.035526       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 20:22:26.949971       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:34216: use of closed network connection
	E1002 20:22:27.126983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.220:8443->192.168.39.1:34232: use of closed network connection
	I1002 20:22:36.255698       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.147.100"}
	I1002 20:22:55.866320       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 20:22:58.485869       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 20:22:58.673007       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.153.61"}
	I1002 20:23:18.996416       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1002 20:23:19.173626       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:23:19.173673       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:23:19.250898       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:23:19.251156       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:23:19.296548       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:23:19.296651       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:23:19.307272       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:23:19.307301       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 20:23:19.353171       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 20:23:19.353196       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 20:23:20.307667       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 20:23:20.354837       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 20:23:20.463142       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1002 20:25:25.056420       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.246.47"}
	
	
	==> kube-controller-manager [776f20e6d352c132d42279dc595915c26bed3175ae138925cfaeecd8a07d127a] <==
	E1002 20:23:30.439820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:23:35.019206       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:23:35.020268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1002 20:23:36.109892       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 20:23:36.109932       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:23:36.128183       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 20:23:36.128268       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 20:23:38.750310       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:23:38.751391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:23:38.968562       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:23:38.970159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:23:50.400915       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:23:50.402089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:23:51.903291       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:23:51.904131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:23:58.261631       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:23:58.262863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:24:16.736647       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:24:16.737650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:24:37.721124       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:24:37.722140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:24:44.079517       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:24:44.080520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 20:24:53.666710       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 20:24:53.667725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [020ac7d4e99a03be998c00c9e843c3116a4e1e415f1c4281712d607586e2d05f] <==
	I1002 20:19:38.778502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:19:38.879876       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:19:38.879915       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.220"]
	E1002 20:19:38.879979       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:19:39.124428       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 20:19:39.124477       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 20:19:39.124523       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:19:39.158551       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:19:39.161387       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:19:39.161844       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:19:39.168985       1 config.go:200] "Starting service config controller"
	I1002 20:19:39.173834       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:19:39.173895       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:19:39.173916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:19:39.173930       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:19:39.173934       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:19:39.185500       1 config.go:309] "Starting node config controller"
	I1002 20:19:39.185531       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:19:39.185538       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:19:39.274026       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:19:39.274085       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:19:39.274039       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f7786d8284ff2ad27a107e931cd1b970371f66403217e5bbc9dc2a9c562ef0e2] <==
	E1002 20:19:29.027292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:19:29.027477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:19:29.027614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:19:29.029045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:19:29.029221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:19:29.029722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:19:29.031791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 20:19:29.872820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:19:29.883474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 20:19:29.897332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:19:29.912936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:19:29.919163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:19:29.926161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:19:30.067639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:19:30.100598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:19:30.101911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:19:30.110960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:19:30.165672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:19:30.283392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:19:30.292925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:19:30.336327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:19:30.366046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:19:30.391840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:19:30.410282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1002 20:19:31.909892       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:23:42 addons-760875 kubelet[1505]: E1002 20:23:42.051093    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436622050665174  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:23:51 addons-760875 kubelet[1505]: I1002 20:23:51.852850    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:23:52 addons-760875 kubelet[1505]: E1002 20:23:52.053710    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436632053195924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:23:52 addons-760875 kubelet[1505]: E1002 20:23:52.053734    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436632053195924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:02 addons-760875 kubelet[1505]: E1002 20:24:02.056519    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436642056043230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:02 addons-760875 kubelet[1505]: E1002 20:24:02.056546    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436642056043230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:12 addons-760875 kubelet[1505]: E1002 20:24:12.060001    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436652059592089  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:12 addons-760875 kubelet[1505]: E1002 20:24:12.060052    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436652059592089  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:22 addons-760875 kubelet[1505]: E1002 20:24:22.063517    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436662062937826  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:22 addons-760875 kubelet[1505]: E1002 20:24:22.063597    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436662062937826  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:32 addons-760875 kubelet[1505]: E1002 20:24:32.066621    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436672066060869  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:32 addons-760875 kubelet[1505]: E1002 20:24:32.066671    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436672066060869  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:39 addons-760875 kubelet[1505]: I1002 20:24:39.863944    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6fptp" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:24:42 addons-760875 kubelet[1505]: E1002 20:24:42.069771    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436682069359617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:42 addons-760875 kubelet[1505]: E1002 20:24:42.069793    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436682069359617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:52 addons-760875 kubelet[1505]: E1002 20:24:52.072793    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436692072281966  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:52 addons-760875 kubelet[1505]: E1002 20:24:52.072837    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436692072281966  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:24:57 addons-760875 kubelet[1505]: I1002 20:24:57.852786    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 20:25:02 addons-760875 kubelet[1505]: E1002 20:25:02.077386    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436702076130027  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:25:02 addons-760875 kubelet[1505]: E1002 20:25:02.077447    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436702076130027  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:25:12 addons-760875 kubelet[1505]: E1002 20:25:12.079465    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436712079094388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:25:12 addons-760875 kubelet[1505]: E1002 20:25:12.079490    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436712079094388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:25:22 addons-760875 kubelet[1505]: E1002 20:25:22.082390    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759436722081964924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:25:22 addons-760875 kubelet[1505]: E1002 20:25:22.082415    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759436722081964924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598014}  inodes_used:{value:201}}"
	Oct 02 20:25:25 addons-760875 kubelet[1505]: I1002 20:25:25.066838    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22kbw\" (UniqueName: \"kubernetes.io/projected/8d369a65-62a6-4804-b061-9220585b457c-kube-api-access-22kbw\") pod \"hello-world-app-5d498dc89-l85k9\" (UID: \"8d369a65-62a6-4804-b061-9220585b457c\") " pod="default/hello-world-app-5d498dc89-l85k9"
	
	
	==> storage-provisioner [1d33f24df065bec40ba5398706276defd087f3fd362e04af6a7599c1a45dc2bf] <==
	W1002 20:25:01.499220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:03.502906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:03.508391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:05.511638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:05.515692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:07.519094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:07.526137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:09.530877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:09.537174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:11.539718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:11.547582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:13.550507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:13.554902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:15.557602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:15.562180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:17.565432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:17.569932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:19.573961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:19.578903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:21.581665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:21.586439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:23.589623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:23.594057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:25.598070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 20:25:25.603198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
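One recurring warning in the dump above is benign but noisy: the storage-provisioner logs every couple of seconds that v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice, i.e. some client inside the provisioner (most likely its leader-election path; an assumption, the log does not say) still watches Endpoints. The replacement objects can be listed directly; a minimal sketch against this run's context:

	# EndpointSlice objects that supersede the deprecated v1 Endpoints
	kubectl --context addons-760875 get endpointslices.discovery.k8s.io -A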
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-760875 -n addons-760875
helpers_test.go:269: (dbg) Run:  kubectl --context addons-760875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-l85k9 ingress-nginx-admission-create-7fscc ingress-nginx-admission-patch-7h825
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-760875 describe pod hello-world-app-5d498dc89-l85k9 ingress-nginx-admission-create-7fscc ingress-nginx-admission-patch-7h825
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-760875 describe pod hello-world-app-5d498dc89-l85k9 ingress-nginx-admission-create-7fscc ingress-nginx-admission-patch-7h825: exit status 1 (70.839952ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-l85k9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-760875/192.168.39.220
	Start Time:       Thu, 02 Oct 2025 20:25:24 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-22kbw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-22kbw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-l85k9 to addons-760875
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7fscc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7h825" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-760875 describe pod hello-world-app-5d498dc89-l85k9 ingress-nginx-admission-create-7fscc ingress-nginx-admission-patch-7h825: exit status 1
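The two NotFound errors above are expected rather than part of the failure: ingress-nginx-admission-create-7fscc and ingress-nginx-admission-patch-7h825 are short-lived Job pods that are normally cleaned up once the admission webhook is configured (an assumption about the addon's hook cleanup; the log only shows they are gone), which is why only hello-world-app-5d498dc89-l85k9 could be described. A quick way to confirm nothing is stuck in that namespace:

	# Jobs and pods left behind in the ingress addon's namespace
	kubectl --context addons-760875 get jobs,pods -n ingress-nginx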
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 addons disable ingress-dns --alsologtostderr -v=1: (1.699305908s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 addons disable ingress --alsologtostderr -v=1: (7.725300625s)
--- FAIL: TestAddons/parallel/Ingress (158.76s)
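The FAIL above leaves two obvious follow-ups: is the ingress-nginx controller itself healthy, and did the Ingress object ever receive an address? A sketch, assuming the addons-760875 profile is still running:

	# Controller health and the Ingress object's assigned ADDRESS
	kubectl --context addons-760875 -n ingress-nginx get pods -o wide
	kubectl --context addons-760875 get ingress -A
	# Recent controller output (deployment name is the addon's default; an assumption)
	kubectl --context addons-760875 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50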

TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 image ls --format short --alsologtostderr: (2.24698442s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-175435 image ls --format short --alsologtostderr:

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-175435 image ls --format short --alsologtostderr:
I1002 20:30:31.197633  506384 out.go:360] Setting OutFile to fd 1 ...
I1002 20:30:31.197939  506384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:31.197949  506384 out.go:374] Setting ErrFile to fd 2...
I1002 20:30:31.197954  506384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:31.198193  506384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
I1002 20:30:31.198822  506384 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:31.198915  506384 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:31.199293  506384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:31.199374  506384 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:31.213338  506384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
I1002 20:30:31.213843  506384 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:31.214509  506384 main.go:141] libmachine: Using API Version  1
I1002 20:30:31.214548  506384 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:31.214962  506384 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:31.215199  506384 main.go:141] libmachine: (functional-175435) Calling .GetState
I1002 20:30:31.217154  506384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:31.217211  506384 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:31.230822  506384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37865
I1002 20:30:31.231292  506384 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:31.231825  506384 main.go:141] libmachine: Using API Version  1
I1002 20:30:31.231854  506384 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:31.232197  506384 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:31.232476  506384 main.go:141] libmachine: (functional-175435) Calling .DriverName
I1002 20:30:31.232699  506384 ssh_runner.go:195] Run: systemctl --version
I1002 20:30:31.232742  506384 main.go:141] libmachine: (functional-175435) Calling .GetSSHHostname
I1002 20:30:31.236237  506384 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:31.236776  506384 main.go:141] libmachine: (functional-175435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:ec:6f", ip: ""} in network mk-functional-175435: {Iface:virbr1 ExpiryTime:2025-10-02 21:28:01 +0000 UTC Type:0 Mac:52:54:00:b1:ec:6f Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:functional-175435 Clientid:01:52:54:00:b1:ec:6f}
I1002 20:30:31.236806  506384 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined IP address 192.168.39.180 and MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:31.237002  506384 main.go:141] libmachine: (functional-175435) Calling .GetSSHPort
I1002 20:30:31.237190  506384 main.go:141] libmachine: (functional-175435) Calling .GetSSHKeyPath
I1002 20:30:31.237363  506384 main.go:141] libmachine: (functional-175435) Calling .GetSSHUsername
I1002 20:30:31.237536  506384 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/functional-175435/id_rsa Username:docker}
I1002 20:30:31.330562  506384 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:30:33.388567  506384 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.057970249s)
W1002 20:30:33.388651  506384 cache_images.go:735] Failed to list images for profile functional-175435 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1002 20:30:33.380302    8960 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2025-10-02T20:30:33Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I1002 20:30:33.388720  506384 main.go:141] libmachine: Making call to close driver server
I1002 20:30:33.388740  506384 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:33.389071  506384 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:33.389092  506384 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:33.389100  506384 main.go:141] libmachine: Making call to close driver server
I1002 20:30:33.389107  506384 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:33.389107  506384 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
I1002 20:30:33.389420  506384 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:33.389477  506384 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:33.389433  506384 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)
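The empty Stdout above traces to the node-side listing: sudo crictl images --output json ran for 2.06s before the CRI call was cancelled with DeadlineExceeded, so minikube had no images to print and the expected registry.k8s.io/pause entry was absent. The near-exact two-second cutoff matches crictl's default --timeout of 2s (a strong hint, not proven by this log); retrying by hand with a wider deadline separates a slow CRI endpoint from a broken one:

	# Re-run the listing with a longer RPC deadline than crictl's default
	out/minikube-linux-amd64 -p functional-175435 ssh -- sudo crictl --timeout 10s images --output json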

TestPreload (159.63s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1002 21:10:06.129415  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m32.239931734s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105781 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-105781 image pull gcr.io/k8s-minikube/busybox: (3.403100517s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-105781
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-105781: (6.770766037s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105781 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105781 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.341309998s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105781 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-02 21:11:49.427941567 +0000 UTC m=+3215.791511226
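What the assertion checked, in sequence: the profile was created with --preload=false on Kubernetes v1.32.0, gcr.io/k8s-minikube/busybox was pulled into the node, the VM was stopped and restarted, and the final image list was expected to still contain busybox. The stdout above holds only the stock v1.32.0 images, so the pulled image did not survive the stop/start cycle. The sequence replays by hand; a sketch using this run's profile name and flags:

	out/minikube-linux-amd64 -p test-preload-105781 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-105781
	out/minikube-linux-amd64 start -p test-preload-105781 --driver=kvm2 --container-runtime=crio
	# busybox should reappear here; in this run it did not
	out/minikube-linux-amd64 -p test-preload-105781 image list | grep busybox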
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-105781 -n test-preload-105781
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-105781 logs -n 25: (1.001784069s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-091885 ssh -n multinode-091885-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ ssh     │ multinode-091885 ssh -n multinode-091885 sudo cat /home/docker/cp-test_multinode-091885-m03_multinode-091885.txt                                                                    │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ cp      │ multinode-091885 cp multinode-091885-m03:/home/docker/cp-test.txt multinode-091885-m02:/home/docker/cp-test_multinode-091885-m03_multinode-091885-m02.txt                           │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ ssh     │ multinode-091885 ssh -n multinode-091885-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ ssh     │ multinode-091885 ssh -n multinode-091885-m02 sudo cat /home/docker/cp-test_multinode-091885-m03_multinode-091885-m02.txt                                                            │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:57 UTC │
	│ node    │ multinode-091885 node stop m03                                                                                                                                                      │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:57 UTC │ 02 Oct 25 20:58 UTC │
	│ node    │ multinode-091885 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ node    │ list -p multinode-091885                                                                                                                                                            │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ stop    │ -p multinode-091885                                                                                                                                                                 │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 21:01 UTC │
	│ start   │ -p multinode-091885 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:01 UTC │ 02 Oct 25 21:04 UTC │
	│ node    │ list -p multinode-091885                                                                                                                                                            │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:04 UTC │                     │
	│ node    │ multinode-091885 node delete m03                                                                                                                                                    │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:04 UTC │ 02 Oct 25 21:04 UTC │
	│ stop    │ multinode-091885 stop                                                                                                                                                               │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:04 UTC │ 02 Oct 25 21:07 UTC │
	│ start   │ -p multinode-091885 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:08 UTC │
	│ node    │ list -p multinode-091885                                                                                                                                                            │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:08 UTC │                     │
	│ start   │ -p multinode-091885-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-091885-m02 │ jenkins │ v1.37.0 │ 02 Oct 25 21:08 UTC │                     │
	│ start   │ -p multinode-091885-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-091885-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 21:08 UTC │ 02 Oct 25 21:09 UTC │
	│ node    │ add -p multinode-091885                                                                                                                                                             │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ delete  │ -p multinode-091885-m03                                                                                                                                                             │ multinode-091885-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ delete  │ -p multinode-091885                                                                                                                                                                 │ multinode-091885     │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ start   │ -p test-preload-105781 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-105781  │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:10 UTC │
	│ image   │ test-preload-105781 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-105781  │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │ 02 Oct 25 21:10 UTC │
	│ stop    │ -p test-preload-105781                                                                                                                                                              │ test-preload-105781  │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │ 02 Oct 25 21:10 UTC │
	│ start   │ -p test-preload-105781 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-105781  │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │ 02 Oct 25 21:11 UTC │
	│ image   │ test-preload-105781 image list                                                                                                                                                      │ test-preload-105781  │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:10:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:10:54.915855  527794 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:54.916167  527794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:54.916178  527794 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:54.916182  527794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:54.916364  527794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 21:10:54.916881  527794 out.go:368] Setting JSON to false
	I1002 21:10:54.917850  527794 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6790,"bootTime":1759432665,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:10:54.917958  527794 start.go:140] virtualization: kvm guest
	I1002 21:10:54.919406  527794 out.go:179] * [test-preload-105781] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:10:54.920567  527794 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:10:54.920576  527794 notify.go:220] Checking for updates...
	I1002 21:10:54.922232  527794 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:10:54.923182  527794 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:10:54.924028  527794 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 21:10:54.924971  527794 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:10:54.925799  527794 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:10:54.927053  527794 config.go:182] Loaded profile config "test-preload-105781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:10:54.927464  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:10:54.927541  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:10:54.940570  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46387
	I1002 21:10:54.941053  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:10:54.941587  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:10:54.941613  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:10:54.942011  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:10:54.942221  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:10:54.943480  527794 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 21:10:54.944339  527794 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:10:54.944637  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:10:54.944672  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:10:54.957773  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
	I1002 21:10:54.958127  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:10:54.958546  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:10:54.958574  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:10:54.958934  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:10:54.959123  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:10:54.990308  527794 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 21:10:54.991152  527794 start.go:304] selected driver: kvm2
	I1002 21:10:54.991166  527794 start.go:924] validating driver "kvm2" against &{Name:test-preload-105781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:10:54.991263  527794 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:10:54.991920  527794 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:10:54.991989  527794 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:10:55.005787  527794 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:10:55.005809  527794 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:10:55.018338  527794 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:10:55.018669  527794 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:10:55.018694  527794 cni.go:84] Creating CNI manager for ""
	I1002 21:10:55.018760  527794 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:10:55.018810  527794 start.go:348] cluster config:
	{Name:test-preload-105781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:10:55.018907  527794 iso.go:125] acquiring lock: {Name:mk7586bb79dc7f44da54ee16895643204aac50ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:10:55.020470  527794 out.go:179] * Starting "test-preload-105781" primary control-plane node in "test-preload-105781" cluster
	I1002 21:10:55.021252  527794 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:10:55.494220  527794 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1002 21:10:55.494258  527794 cache.go:58] Caching tarball of preloaded images
	I1002 21:10:55.494463  527794 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:10:55.495877  527794 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1002 21:10:55.496865  527794 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 21:10:55.607121  527794 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1002 21:10:55.607165  527794 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:05.276950  527794 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1002 21:11:05.277093  527794 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/config.json ...
	I1002 21:11:05.277327  527794 start.go:360] acquireMachinesLock for test-preload-105781: {Name:mk9e7957cdce1fd4b26ce430105927ec465bcae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 21:11:05.277388  527794 start.go:364] duration metric: took 37.828µs to acquireMachinesLock for "test-preload-105781"
	I1002 21:11:05.277406  527794 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:11:05.277411  527794 fix.go:54] fixHost starting: 
	I1002 21:11:05.277684  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:11:05.277750  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:11:05.290795  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I1002 21:11:05.291317  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:11:05.291836  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:11:05.291860  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:11:05.292189  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:11:05.292382  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:05.292541  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetState
	I1002 21:11:05.294207  527794 fix.go:112] recreateIfNeeded on test-preload-105781: state=Stopped err=<nil>
	I1002 21:11:05.294242  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	W1002 21:11:05.294392  527794 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:11:05.295792  527794 out.go:252] * Restarting existing kvm2 VM for "test-preload-105781" ...
	I1002 21:11:05.295820  527794 main.go:141] libmachine: (test-preload-105781) Calling .Start
	I1002 21:11:05.295967  527794 main.go:141] libmachine: (test-preload-105781) starting domain...
	I1002 21:11:05.295986  527794 main.go:141] libmachine: (test-preload-105781) ensuring networks are active...
	I1002 21:11:05.296668  527794 main.go:141] libmachine: (test-preload-105781) Ensuring network default is active
	I1002 21:11:05.296999  527794 main.go:141] libmachine: (test-preload-105781) Ensuring network mk-test-preload-105781 is active
	I1002 21:11:05.297411  527794 main.go:141] libmachine: (test-preload-105781) getting domain XML...
	I1002 21:11:05.298379  527794 main.go:141] libmachine: (test-preload-105781) DBG | starting domain XML:
	I1002 21:11:05.298398  527794 main.go:141] libmachine: (test-preload-105781) DBG | <domain type='kvm'>
	I1002 21:11:05.298410  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <name>test-preload-105781</name>
	I1002 21:11:05.298430  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <uuid>7e2be3e9-6f73-4ffb-84c1-70c8fcd313b8</uuid>
	I1002 21:11:05.298442  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <memory unit='KiB'>3145728</memory>
	I1002 21:11:05.298455  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1002 21:11:05.298466  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 21:11:05.298477  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <os>
	I1002 21:11:05.298490  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 21:11:05.298497  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <boot dev='cdrom'/>
	I1002 21:11:05.298510  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <boot dev='hd'/>
	I1002 21:11:05.298522  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <bootmenu enable='no'/>
	I1002 21:11:05.298531  527794 main.go:141] libmachine: (test-preload-105781) DBG |   </os>
	I1002 21:11:05.298538  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <features>
	I1002 21:11:05.298549  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <acpi/>
	I1002 21:11:05.298559  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <apic/>
	I1002 21:11:05.298567  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <pae/>
	I1002 21:11:05.298572  527794 main.go:141] libmachine: (test-preload-105781) DBG |   </features>
	I1002 21:11:05.298595  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 21:11:05.298615  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <clock offset='utc'/>
	I1002 21:11:05.298625  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 21:11:05.298653  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <on_reboot>restart</on_reboot>
	I1002 21:11:05.298666  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <on_crash>destroy</on_crash>
	I1002 21:11:05.298673  527794 main.go:141] libmachine: (test-preload-105781) DBG |   <devices>
	I1002 21:11:05.298686  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 21:11:05.298693  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <disk type='file' device='cdrom'>
	I1002 21:11:05.298700  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <driver name='qemu' type='raw'/>
	I1002 21:11:05.298728  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/boot2docker.iso'/>
	I1002 21:11:05.298746  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 21:11:05.298756  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <readonly/>
	I1002 21:11:05.298770  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 21:11:05.298779  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </disk>
	I1002 21:11:05.298793  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <disk type='file' device='disk'>
	I1002 21:11:05.298808  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 21:11:05.298826  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/test-preload-105781.rawdisk'/>
	I1002 21:11:05.298837  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <target dev='hda' bus='virtio'/>
	I1002 21:11:05.298849  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 21:11:05.298858  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </disk>
	I1002 21:11:05.298871  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 21:11:05.298894  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 21:11:05.298907  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </controller>
	I1002 21:11:05.298918  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 21:11:05.298931  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 21:11:05.298943  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 21:11:05.298979  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </controller>
	I1002 21:11:05.299000  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <interface type='network'>
	I1002 21:11:05.299012  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <mac address='52:54:00:4a:da:2a'/>
	I1002 21:11:05.299030  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <source network='mk-test-preload-105781'/>
	I1002 21:11:05.299040  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <model type='virtio'/>
	I1002 21:11:05.299050  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 21:11:05.299058  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </interface>
	I1002 21:11:05.299067  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <interface type='network'>
	I1002 21:11:05.299076  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <mac address='52:54:00:e3:7b:ee'/>
	I1002 21:11:05.299084  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <source network='default'/>
	I1002 21:11:05.299092  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <model type='virtio'/>
	I1002 21:11:05.299106  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 21:11:05.299115  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </interface>
	I1002 21:11:05.299129  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <serial type='pty'>
	I1002 21:11:05.299138  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <target type='isa-serial' port='0'>
	I1002 21:11:05.299148  527794 main.go:141] libmachine: (test-preload-105781) DBG |         <model name='isa-serial'/>
	I1002 21:11:05.299157  527794 main.go:141] libmachine: (test-preload-105781) DBG |       </target>
	I1002 21:11:05.299167  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </serial>
	I1002 21:11:05.299195  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <console type='pty'>
	I1002 21:11:05.299217  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <target type='serial' port='0'/>
	I1002 21:11:05.299239  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </console>
	I1002 21:11:05.299254  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <input type='mouse' bus='ps2'/>
	I1002 21:11:05.299272  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 21:11:05.299287  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <audio id='1' type='none'/>
	I1002 21:11:05.299300  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <memballoon model='virtio'>
	I1002 21:11:05.299313  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 21:11:05.299322  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </memballoon>
	I1002 21:11:05.299327  527794 main.go:141] libmachine: (test-preload-105781) DBG |     <rng model='virtio'>
	I1002 21:11:05.299335  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <backend model='random'>/dev/random</backend>
	I1002 21:11:05.299341  527794 main.go:141] libmachine: (test-preload-105781) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 21:11:05.299353  527794 main.go:141] libmachine: (test-preload-105781) DBG |     </rng>
	I1002 21:11:05.299360  527794 main.go:141] libmachine: (test-preload-105781) DBG |   </devices>
	I1002 21:11:05.299379  527794 main.go:141] libmachine: (test-preload-105781) DBG | </domain>
	I1002 21:11:05.299392  527794 main.go:141] libmachine: (test-preload-105781) DBG | 
	I1002 21:11:06.537807  527794 main.go:141] libmachine: (test-preload-105781) waiting for domain to start...
	I1002 21:11:06.539111  527794 main.go:141] libmachine: (test-preload-105781) domain is now running
	I1002 21:11:06.539138  527794 main.go:141] libmachine: (test-preload-105781) waiting for IP...
	I1002 21:11:06.539950  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:06.540505  527794 main.go:141] libmachine: (test-preload-105781) found domain IP: 192.168.39.138
	I1002 21:11:06.540533  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has current primary IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:06.540541  527794 main.go:141] libmachine: (test-preload-105781) reserving static IP address...
	I1002 21:11:06.541024  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "test-preload-105781", mac: "52:54:00:4a:da:2a", ip: "192.168.39.138"} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:09:27 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:06.541048  527794 main.go:141] libmachine: (test-preload-105781) reserved static IP address 192.168.39.138 for domain test-preload-105781
	I1002 21:11:06.541065  527794 main.go:141] libmachine: (test-preload-105781) DBG | skip adding static IP to network mk-test-preload-105781 - found existing host DHCP lease matching {name: "test-preload-105781", mac: "52:54:00:4a:da:2a", ip: "192.168.39.138"}
	I1002 21:11:06.541091  527794 main.go:141] libmachine: (test-preload-105781) DBG | Getting to WaitForSSH function...
	I1002 21:11:06.541107  527794 main.go:141] libmachine: (test-preload-105781) waiting for SSH...
	I1002 21:11:06.543215  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:06.543545  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:09:27 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:06.543591  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:06.543722  527794 main.go:141] libmachine: (test-preload-105781) DBG | Using SSH client type: external
	I1002 21:11:06.543773  527794 main.go:141] libmachine: (test-preload-105781) DBG | Using SSH private key: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa (-rw-------)
	I1002 21:11:06.543812  527794 main.go:141] libmachine: (test-preload-105781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 21:11:06.543829  527794 main.go:141] libmachine: (test-preload-105781) DBG | About to run SSH command:
	I1002 21:11:06.543838  527794 main.go:141] libmachine: (test-preload-105781) DBG | exit 0
	I1002 21:11:16.828494  527794 main.go:141] libmachine: (test-preload-105781) DBG | SSH cmd err, output: exit status 255: 
	I1002 21:11:16.828525  527794 main.go:141] libmachine: (test-preload-105781) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1002 21:11:16.828535  527794 main.go:141] libmachine: (test-preload-105781) DBG | command : exit 0
	I1002 21:11:16.828541  527794 main.go:141] libmachine: (test-preload-105781) DBG | err     : exit status 255
	I1002 21:11:16.828550  527794 main.go:141] libmachine: (test-preload-105781) DBG | output  : 
	I1002 21:11:19.830648  527794 main.go:141] libmachine: (test-preload-105781) DBG | Getting to WaitForSSH function...
	I1002 21:11:19.833550  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:19.834002  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:19.834036  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:19.834220  527794 main.go:141] libmachine: (test-preload-105781) DBG | Using SSH client type: external
	I1002 21:11:19.834247  527794 main.go:141] libmachine: (test-preload-105781) DBG | Using SSH private key: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa (-rw-------)
	I1002 21:11:19.834271  527794 main.go:141] libmachine: (test-preload-105781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 21:11:19.834283  527794 main.go:141] libmachine: (test-preload-105781) DBG | About to run SSH command:
	I1002 21:11:19.834293  527794 main.go:141] libmachine: (test-preload-105781) DBG | exit 0
	I1002 21:11:19.964592  527794 main.go:141] libmachine: (test-preload-105781) DBG | SSH cmd err, output: <nil>: 
	I1002 21:11:19.965011  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetConfigRaw
	I1002 21:11:19.965607  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetIP
	I1002 21:11:19.968410  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:19.968838  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:19.968868  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:19.969149  527794 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/config.json ...
	I1002 21:11:19.969412  527794 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:19.969434  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:19.969644  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:19.971906  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:19.972248  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:19.972271  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:19.972411  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:19.972587  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:19.972729  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:19.972847  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:19.972973  527794 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:19.973260  527794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I1002 21:11:19.973275  527794 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:20.079797  527794 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 21:11:20.079829  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetMachineName
	I1002 21:11:20.080094  527794 buildroot.go:166] provisioning hostname "test-preload-105781"
	I1002 21:11:20.080122  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetMachineName
	I1002 21:11:20.080329  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:20.082970  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.083287  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:20.083328  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.083463  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:20.083666  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.083852  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.083985  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:20.084158  527794 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:20.084358  527794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I1002 21:11:20.084373  527794 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-105781 && echo "test-preload-105781" | sudo tee /etc/hostname
	I1002 21:11:20.205876  527794 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-105781
	
	I1002 21:11:20.205919  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:20.208977  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.209389  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:20.209426  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.209659  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:20.209879  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.210047  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.210161  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:20.210337  527794 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:20.210534  527794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I1002 21:11:20.210552  527794 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-105781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-105781/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-105781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:20.325109  527794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:20.325146  527794 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21682-492630/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-492630/.minikube}
	I1002 21:11:20.325191  527794 buildroot.go:174] setting up certificates
	I1002 21:11:20.325209  527794 provision.go:84] configureAuth start
	I1002 21:11:20.325223  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetMachineName
	I1002 21:11:20.325456  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetIP
	I1002 21:11:20.328379  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.328731  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:20.328751  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.328921  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:20.331264  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.331634  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:20.331666  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.331844  527794 provision.go:143] copyHostCerts
	I1002 21:11:20.331904  527794 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem, removing ...
	I1002 21:11:20.331929  527794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem
	I1002 21:11:20.332019  527794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem (1078 bytes)
	I1002 21:11:20.332183  527794 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem, removing ...
	I1002 21:11:20.332198  527794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem
	I1002 21:11:20.332242  527794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem (1123 bytes)
	I1002 21:11:20.332342  527794 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem, removing ...
	I1002 21:11:20.332353  527794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem
	I1002 21:11:20.332390  527794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem (1675 bytes)
	I1002 21:11:20.332477  527794 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem org=jenkins.test-preload-105781 san=[127.0.0.1 192.168.39.138 localhost minikube test-preload-105781]
	I1002 21:11:20.466560  527794 provision.go:177] copyRemoteCerts
	I1002 21:11:20.466617  527794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:20.466642  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:20.469566  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.469953  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:20.469994  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.470152  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:20.470381  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.470539  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:20.470682  527794 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa Username:docker}
	I1002 21:11:20.554056  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:11:20.581145  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 21:11:20.607439  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:20.633856  527794 provision.go:87] duration metric: took 308.62893ms to configureAuth
	I1002 21:11:20.633890  527794 buildroot.go:189] setting minikube options for container-runtime
	I1002 21:11:20.634058  527794 config.go:182] Loaded profile config "test-preload-105781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:11:20.634134  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:20.637008  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.637416  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:20.637440  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.637642  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:20.637835  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.637983  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.638087  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:20.638242  527794 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:20.638464  527794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I1002 21:11:20.638482  527794 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:20.874088  527794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:20.874128  527794 machine.go:96] duration metric: took 904.698694ms to provisionDockerMachine
	I1002 21:11:20.874146  527794 start.go:293] postStartSetup for "test-preload-105781" (driver="kvm2")
	I1002 21:11:20.874163  527794 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:20.874245  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:20.874657  527794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:20.874721  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:20.877733  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.878169  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:20.878190  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:20.878371  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:20.878575  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:20.878748  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:20.878898  527794 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa Username:docker}
	I1002 21:11:20.962820  527794 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:20.967529  527794 info.go:137] Remote host: Buildroot 2025.02
	I1002 21:11:20.967557  527794 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/addons for local assets ...
	I1002 21:11:20.967647  527794 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/files for local assets ...
	I1002 21:11:20.967758  527794 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem -> 4975692.pem in /etc/ssl/certs
	I1002 21:11:20.967855  527794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:20.978565  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:11:21.005239  527794 start.go:296] duration metric: took 131.076705ms for postStartSetup
	I1002 21:11:21.005280  527794 fix.go:56] duration metric: took 15.727867331s for fixHost
	I1002 21:11:21.005308  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:21.008228  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.008590  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:21.008615  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.008808  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:21.008960  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:21.009106  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:21.009245  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:21.009386  527794 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:21.009668  527794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.138 22 <nil> <nil>}
	I1002 21:11:21.009694  527794 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 21:11:21.116696  527794 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759439481.074433585
	
	I1002 21:11:21.116740  527794 fix.go:216] guest clock: 1759439481.074433585
	I1002 21:11:21.116750  527794 fix.go:229] Guest: 2025-10-02 21:11:21.074433585 +0000 UTC Remote: 2025-10-02 21:11:21.005285875 +0000 UTC m=+26.127439539 (delta=69.14771ms)
	I1002 21:11:21.116776  527794 fix.go:200] guest clock delta is within tolerance: 69.14771ms
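
The clock check runs date +%s.%N on the guest and compares it against the host wall clock, forcing a resync only when the delta exceeds the tolerance; here the ~69ms skew is accepted. A rough manual reproduction (a sketch; profile name assumed, and the SSH round trip biases the measurement):

	host=$(date +%s.%N)
	guest=$(minikube -p test-preload-105781 ssh -- date +%s.%N)
	# bc handles the fractional seconds
	echo "guest-host delta: $(echo "$guest - $host" | bc)s"
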
	I1002 21:11:21.116783  527794 start.go:83] releasing machines lock for "test-preload-105781", held for 15.839383779s
	I1002 21:11:21.116810  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:21.117071  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetIP
	I1002 21:11:21.119917  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.120244  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:21.120272  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.120484  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:21.120983  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:21.121156  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:21.121241  527794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:21.121295  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:21.121346  527794 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:21.121371  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:21.124243  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.124270  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.124729  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:21.124766  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:21.124788  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.124806  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:21.124997  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:21.125206  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:21.125210  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:21.125412  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:21.125420  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:21.125604  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:21.125613  527794 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa Username:docker}
	I1002 21:11:21.125761  527794 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa Username:docker}
	I1002 21:11:21.236947  527794 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:21.243284  527794 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:21.394070  527794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:21.400478  527794 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:21.400559  527794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:21.419337  527794 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
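
ssh_runner logs the raw argv, which is why the find expression above appears without shell quoting. An equivalently quoted form for running interactively on the node (a sketch; GNU findutils assumed):

	# rename every bridge/podman CNI config so CRI-O ignores it, skipping already-disabled files
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
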
	I1002 21:11:21.419369  527794 start.go:495] detecting cgroup driver to use...
	I1002 21:11:21.419444  527794 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:21.439024  527794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:21.455528  527794 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:21.455597  527794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:21.472748  527794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:21.488761  527794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:21.631575  527794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:21.838022  527794 docker.go:234] disabling docker service ...
	I1002 21:11:21.838109  527794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:21.854230  527794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:21.869390  527794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:22.025953  527794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:22.164292  527794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:22.179919  527794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:22.200961  527794 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 21:11:22.201029  527794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:22.215243  527794 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:11:22.215307  527794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:22.229187  527794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:22.243492  527794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:22.257554  527794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:22.270995  527794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:22.285255  527794 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:22.307524  527794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
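
The run of sed edits above rewrites CRI-O's drop-in in place: the pause image, cgroupfs as the cgroup manager, a pod-scoped conmon cgroup, and a default_sysctls entry that opens unprivileged ports from 0. To inspect the net effect after the restart (a sketch; path taken from the log):

	minikube -p test-preload-105781 ssh -- sudo grep -nE \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
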
	I1002 21:11:22.321646  527794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:22.333939  527794 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 21:11:22.334001  527794 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 21:11:22.356376  527794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
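
The failed sysctl probe above is expected: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the code warns, loads the module, and moves on. The equivalent manual sequence (a sketch):

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # same effect as the echo above
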
	I1002 21:11:22.369760  527794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:22.515579  527794 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:11:22.625927  527794 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:22.626011  527794 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:22.631690  527794 start.go:563] Will wait 60s for crictl version
	I1002 21:11:22.631774  527794 ssh_runner.go:195] Run: which crictl
	I1002 21:11:22.635664  527794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:11:22.670869  527794 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 21:11:22.670968  527794 ssh_runner.go:195] Run: crio --version
	I1002 21:11:22.698849  527794 ssh_runner.go:195] Run: crio --version
	I1002 21:11:22.728560  527794 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1002 21:11:22.729579  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetIP
	I1002 21:11:22.732778  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:22.733200  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:22.733227  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:22.733390  527794 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:22.737415  527794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:22.750742  527794 kubeadm.go:883] updating cluster {Name:test-preload-105781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:22.750855  527794 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 21:11:22.750901  527794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:22.784223  527794 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1002 21:11:22.784292  527794 ssh_runner.go:195] Run: which lz4
	I1002 21:11:22.788206  527794 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 21:11:22.792464  527794 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 21:11:22.792486  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1002 21:11:24.094172  527794 crio.go:462] duration metric: took 1.305986994s to copy over tarball
	I1002 21:11:24.094249  527794 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 21:11:25.719445  527794 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.625146527s)
	I1002 21:11:25.719484  527794 crio.go:469] duration metric: took 1.625277576s to extract the tarball
	I1002 21:11:25.719492  527794 ssh_runner.go:146] rm: /preloaded.tar.lz4
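
The preload flow is: stat /preloaded.tar.lz4 on the guest, copy the ~398 MB cached archive over when it is missing, extract it into /var with extended attributes preserved, then delete it. The guest-side steps as plain commands (a sketch; lz4 must be present, and getting the tarball onto the guest is left as in the log):

	stat -c '%s %y' /preloaded.tar.lz4 || echo 'preload missing: copy it over first'
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4   # free the space once the images are unpacked
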
	I1002 21:11:25.759140  527794 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:25.801652  527794 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:25.801678  527794 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:25.801686  527794 kubeadm.go:934] updating node { 192.168.39.138 8443 v1.32.0 crio true true} ...
	I1002 21:11:25.801812  527794 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-105781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-105781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:11:25.801893  527794 ssh_runner.go:195] Run: crio config
	I1002 21:11:25.843969  527794 cni.go:84] Creating CNI manager for ""
	I1002 21:11:25.844002  527794 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:11:25.844026  527794 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:25.844051  527794 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.138 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-105781 NodeName:test-preload-105781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:25.844169  527794 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-105781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.138"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.138"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
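
Before this generated config is handed to the init phases below, it can be sanity-checked offline once it lands at /var/tmp/minikube/kubeadm.yaml; kubeadm ships a validator for this (a sketch against the binary and path used in this run):

	sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
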
	
	I1002 21:11:25.844233  527794 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1002 21:11:25.856034  527794 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:25.856092  527794 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:11:25.867912  527794 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1002 21:11:25.887140  527794 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:25.905396  527794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1002 21:11:25.924360  527794 ssh_runner.go:195] Run: grep 192.168.39.138	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:25.928356  527794 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:25.941659  527794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:26.078597  527794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:26.110353  527794 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781 for IP: 192.168.39.138
	I1002 21:11:26.110375  527794 certs.go:195] generating shared ca certs ...
	I1002 21:11:26.110393  527794 certs.go:227] acquiring lock for ca certs: {Name:mk99bb18e623cf4cf4a4efda3dab88668aa481a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:26.110619  527794 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key
	I1002 21:11:26.110694  527794 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key
	I1002 21:11:26.110729  527794 certs.go:257] generating profile certs ...
	I1002 21:11:26.110829  527794 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/client.key
	I1002 21:11:26.110886  527794 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/apiserver.key.cec70fc9
	I1002 21:11:26.110920  527794 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/proxy-client.key
	I1002 21:11:26.111025  527794 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem (1338 bytes)
	W1002 21:11:26.111059  527794 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:26.111069  527794 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:26.111090  527794 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:11:26.111110  527794 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:26.111133  527794 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:26.111169  527794 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:11:26.111986  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:26.145190  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:26.176107  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:26.202151  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:26.227845  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 21:11:26.253479  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:11:26.280183  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:26.307501  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:11:26.334082  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:26.359791  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem --> /usr/share/ca-certificates/497569.pem (1338 bytes)
	I1002 21:11:26.385289  527794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /usr/share/ca-certificates/4975692.pem (1708 bytes)
	I1002 21:11:26.411195  527794 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:26.429087  527794 ssh_runner.go:195] Run: openssl version
	I1002 21:11:26.434630  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497569.pem && ln -fs /usr/share/ca-certificates/497569.pem /etc/ssl/certs/497569.pem"
	I1002 21:11:26.445664  527794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497569.pem
	I1002 21:11:26.450482  527794 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:27 /usr/share/ca-certificates/497569.pem
	I1002 21:11:26.450533  527794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497569.pem
	I1002 21:11:26.457172  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497569.pem /etc/ssl/certs/51391683.0"
	I1002 21:11:26.468439  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4975692.pem && ln -fs /usr/share/ca-certificates/4975692.pem /etc/ssl/certs/4975692.pem"
	I1002 21:11:26.479541  527794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4975692.pem
	I1002 21:11:26.484369  527794 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:27 /usr/share/ca-certificates/4975692.pem
	I1002 21:11:26.484411  527794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4975692.pem
	I1002 21:11:26.490691  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4975692.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:26.501786  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:26.513471  527794 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:26.518270  527794 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:26.518319  527794 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:26.524772  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
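
The hash-named links (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL's subject-hash lookup scheme: clients locate a CA in /etc/ssl/certs by hashing its subject, so each PEM needs a <hash>.0 symlink. The generic form of the step above (a sketch):

	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
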
	I1002 21:11:26.536122  527794 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:26.540868  527794 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:11:26.547414  527794 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:11:26.553804  527794 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:11:26.560357  527794 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:11:26.566892  527794 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:11:26.573280  527794 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
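
Each -checkend 86400 probe exits 0 only if the certificate is still valid 24 hours from now, which is how the restart path decides it can reuse the existing control-plane certs instead of regenerating them. Looped over the same files (a sketch; paths from the log):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    && echo "$c: valid for >=24h" || echo "$c: expiring soon"
	done
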
	I1002 21:11:26.579730  527794 kubeadm.go:400] StartCluster: {Name:test-preload-105781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-105781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:26.579826  527794 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:26.579889  527794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:26.614606  527794 cri.go:89] found id: ""
	I1002 21:11:26.614695  527794 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:26.627085  527794 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:11:26.627104  527794 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:11:26.627149  527794 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:11:26.637736  527794 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:26.638177  527794 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-105781" does not appear in /home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:11:26.638285  527794 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-492630/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-105781" cluster setting kubeconfig missing "test-preload-105781" context setting]
	I1002 21:11:26.638575  527794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/kubeconfig: {Name:mk4bbb10e20496c232fa2a76298e716d67d36cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:26.639154  527794 kapi.go:59] client config for test-preload-105781: &rest.Config{Host:"https://192.168.39.138:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/client.key", CAFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:11:26.639556  527794 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:11:26.639570  527794 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:11:26.639575  527794 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:11:26.639579  527794 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:11:26.639583  527794 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:11:26.639974  527794 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:11:26.649719  527794 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.138
	I1002 21:11:26.649749  527794 kubeadm.go:1160] stopping kube-system containers ...
	I1002 21:11:26.649764  527794 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 21:11:26.649806  527794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:26.684010  527794 cri.go:89] found id: ""
	I1002 21:11:26.684066  527794 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 21:11:26.702051  527794 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:26.712603  527794 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:26.712620  527794 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:26.712669  527794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:26.722305  527794 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:26.722355  527794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:26.732487  527794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:26.742108  527794 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:26.742156  527794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:26.752144  527794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:26.761718  527794 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:26.761753  527794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:26.771808  527794 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:26.781262  527794 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:26.781294  527794 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
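
Each of the four kubeconfigs is grepped for the expected control-plane endpoint and deleted when the check fails; since none of the files survived the reload, every grep exits 2 and the rm calls are no-ops. The same cleanup as a loop (a sketch):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
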
	I1002 21:11:26.791229  527794 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:26.801466  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:11:26.850246  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:11:27.814501  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:11:28.053133  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:11:28.121191  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:11:28.206132  527794 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:11:28.206210  527794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:11:28.707214  527794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:11:29.207338  527794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:11:29.706311  527794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:11:30.206581  527794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:11:30.707337  527794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:11:30.734094  527794 api_server.go:72] duration metric: took 2.527948549s to wait for apiserver process to appear ...
	I1002 21:11:30.734123  527794 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:11:30.734155  527794 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I1002 21:11:32.824570  527794 api_server.go:279] https://192.168.39.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:11:32.824604  527794 api_server.go:103] status: https://192.168.39.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:11:32.824618  527794 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I1002 21:11:32.847434  527794 api_server.go:279] https://192.168.39.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:11:32.847465  527794 api_server.go:103] status: https://192.168.39.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:11:33.234858  527794 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I1002 21:11:33.241669  527794 api_server.go:279] https://192.168.39.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:11:33.241695  527794 api_server.go:103] status: https://192.168.39.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:11:33.734322  527794 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I1002 21:11:33.740867  527794 api_server.go:279] https://192.168.39.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:11:33.740904  527794 api_server.go:103] status: https://192.168.39.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:11:34.234533  527794 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I1002 21:11:34.242729  527794 api_server.go:279] https://192.168.39.138:8443/healthz returned 200:
	ok
	I1002 21:11:34.250623  527794 api_server.go:141] control plane version: v1.32.0
	I1002 21:11:34.250661  527794 api_server.go:131] duration metric: took 3.516517955s to wait for apiserver health ...
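
The healthz progression above is the normal restart sequence: anonymous probes get 403 until the RBAC bootstrap creates the roles that let unauthenticated clients read /healthz, then 500 while poststarthook/rbac/bootstrap-roles and the priority-class bootstrap are still finishing, and finally 200. The same probe from a shell (a sketch; -k because no client certificate is presented):

	until curl -fsk https://192.168.39.138:8443/healthz >/dev/null 2>&1; do sleep 0.5; done
	curl -sk 'https://192.168.39.138:8443/healthz?verbose' | tail -n 5   # per-check breakdown
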
	I1002 21:11:34.250675  527794 cni.go:84] Creating CNI manager for ""
	I1002 21:11:34.250683  527794 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:11:34.251898  527794 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 21:11:34.252827  527794 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 21:11:34.271232  527794 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 21:11:34.291684  527794 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:11:34.295517  527794 system_pods.go:59] 7 kube-system pods found
	I1002 21:11:34.295553  527794 system_pods.go:61] "coredns-668d6bf9bc-zl8zp" [4282759a-512a-4c97-8733-8d7ba955fecd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:11:34.295562  527794 system_pods.go:61] "etcd-test-preload-105781" [cfda92e9-0131-47cb-96be-3d7a627501a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:11:34.295569  527794 system_pods.go:61] "kube-apiserver-test-preload-105781" [996542ee-fa90-4c26-b452-0c238df99a00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:11:34.295577  527794 system_pods.go:61] "kube-controller-manager-test-preload-105781" [e0b946be-9a7e-4b1a-9906-5099138e7ca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:11:34.295581  527794 system_pods.go:61] "kube-proxy-njwsk" [f6c64448-6037-4e83-b892-6e457d6d832f] Running
	I1002 21:11:34.295586  527794 system_pods.go:61] "kube-scheduler-test-preload-105781" [21c820f9-1927-4457-90cb-86bc59801a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:11:34.295590  527794 system_pods.go:61] "storage-provisioner" [89bb4083-ca27-40f4-8d74-ef5cbfc483b6] Running
	I1002 21:11:34.295595  527794 system_pods.go:74] duration metric: took 3.89116ms to wait for pod list to return data ...
	I1002 21:11:34.295601  527794 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:11:34.299646  527794 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 21:11:34.299666  527794 node_conditions.go:123] node cpu capacity is 2
	I1002 21:11:34.299680  527794 node_conditions.go:105] duration metric: took 4.073388ms to run NodePressure ...
	I1002 21:11:34.299731  527794 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:11:34.555939  527794 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 21:11:34.559106  527794 kubeadm.go:743] kubelet initialised
	I1002 21:11:34.559126  527794 kubeadm.go:744] duration metric: took 3.160518ms waiting for restarted kubelet to initialise ...
	I1002 21:11:34.559142  527794 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:11:34.573393  527794 ops.go:34] apiserver oom_adj: -16
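
An oom_adj of -16 (on the legacy -17..15 scale) shows the kubelet gave the apiserver a strongly negative OOM score, so the kernel will reclaim almost anything else first under memory pressure. Reading both the legacy and current scales (a sketch):

	pid=$(pgrep -xn kube-apiserver)
	cat /proc/$pid/oom_adj /proc/$pid/oom_score_adj
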
	I1002 21:11:34.573413  527794 kubeadm.go:601] duration metric: took 7.946298844s to restartPrimaryControlPlane
	I1002 21:11:34.573429  527794 kubeadm.go:402] duration metric: took 7.993700913s to StartCluster
	I1002 21:11:34.573449  527794 settings.go:142] acquiring lock: {Name:mk713e1c8098ab4e764fe2cb637b0408c7b1a3ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:34.573522  527794 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:11:34.574103  527794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/kubeconfig: {Name:mk4bbb10e20496c232fa2a76298e716d67d36cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:34.574333  527794 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.138 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:34.574385  527794 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:11:34.574488  527794 addons.go:69] Setting storage-provisioner=true in profile "test-preload-105781"
	I1002 21:11:34.574513  527794 addons.go:238] Setting addon storage-provisioner=true in "test-preload-105781"
	I1002 21:11:34.574512  527794 addons.go:69] Setting default-storageclass=true in profile "test-preload-105781"
	W1002 21:11:34.574525  527794 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:11:34.574536  527794 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-105781"
	I1002 21:11:34.574544  527794 config.go:182] Loaded profile config "test-preload-105781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 21:11:34.574571  527794 host.go:66] Checking if "test-preload-105781" exists ...
	I1002 21:11:34.575015  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:11:34.575069  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:11:34.575015  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:11:34.575151  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:11:34.575769  527794 out.go:179] * Verifying Kubernetes components...
	I1002 21:11:34.576932  527794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:34.589259  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41581
	I1002 21:11:34.589911  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:11:34.590380  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:11:34.590402  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:11:34.590750  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:11:34.590948  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetState
	I1002 21:11:34.593348  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I1002 21:11:34.593394  527794 kapi.go:59] client config for test-preload-105781: &rest.Config{Host:"https://192.168.39.138:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/client.key", CAFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:11:34.593795  527794 addons.go:238] Setting addon default-storageclass=true in "test-preload-105781"
	W1002 21:11:34.593819  527794 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:11:34.593829  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:11:34.593851  527794 host.go:66] Checking if "test-preload-105781" exists ...
	I1002 21:11:34.594139  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:11:34.594195  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:11:34.594283  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:11:34.594304  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:11:34.594640  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:11:34.595076  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:11:34.595103  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:11:34.607415  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34559
	I1002 21:11:34.607894  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:11:34.607939  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
	I1002 21:11:34.608286  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:11:34.608408  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:11:34.608433  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:11:34.608658  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:11:34.608672  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:11:34.608734  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:11:34.608965  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:11:34.609121  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetState
	I1002 21:11:34.609315  527794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:11:34.609346  527794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:11:34.610855  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:34.612258  527794 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:11:34.613237  527794 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:11:34.613254  527794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:11:34.613276  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:34.615986  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:34.616510  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:34.616536  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:34.616770  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:34.616938  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:34.617115  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:34.617266  527794 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa Username:docker}
	I1002 21:11:34.623277  527794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I1002 21:11:34.623814  527794 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:11:34.624257  527794 main.go:141] libmachine: Using API Version  1
	I1002 21:11:34.624280  527794 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:11:34.624634  527794 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:11:34.624794  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetState
	I1002 21:11:34.626448  527794 main.go:141] libmachine: (test-preload-105781) Calling .DriverName
	I1002 21:11:34.626640  527794 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:11:34.626657  527794 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:11:34.626680  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHHostname
	I1002 21:11:34.629130  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:34.629584  527794 main.go:141] libmachine: (test-preload-105781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:da:2a", ip: ""} in network mk-test-preload-105781: {Iface:virbr1 ExpiryTime:2025-10-02 22:11:16 +0000 UTC Type:0 Mac:52:54:00:4a:da:2a Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:test-preload-105781 Clientid:01:52:54:00:4a:da:2a}
	I1002 21:11:34.629614  527794 main.go:141] libmachine: (test-preload-105781) DBG | domain test-preload-105781 has defined IP address 192.168.39.138 and MAC address 52:54:00:4a:da:2a in network mk-test-preload-105781
	I1002 21:11:34.629883  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHPort
	I1002 21:11:34.630025  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHKeyPath
	I1002 21:11:34.630173  527794 main.go:141] libmachine: (test-preload-105781) Calling .GetSSHUsername
	I1002 21:11:34.630288  527794 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/test-preload-105781/id_rsa Username:docker}
	I1002 21:11:34.794662  527794 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:34.815484  527794 node_ready.go:35] waiting up to 6m0s for node "test-preload-105781" to be "Ready" ...
	I1002 21:11:34.817970  527794 node_ready.go:49] node "test-preload-105781" is "Ready"
	I1002 21:11:34.817992  527794 node_ready.go:38] duration metric: took 2.458026ms for node "test-preload-105781" to be "Ready" ...
	I1002 21:11:34.818009  527794 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:11:34.818066  527794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:11:34.839195  527794 api_server.go:72] duration metric: took 264.832975ms to wait for apiserver process to appear ...
	I1002 21:11:34.839213  527794 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:11:34.839230  527794 api_server.go:253] Checking apiserver healthz at https://192.168.39.138:8443/healthz ...
	I1002 21:11:34.844754  527794 api_server.go:279] https://192.168.39.138:8443/healthz returned 200:
	ok
	I1002 21:11:34.845854  527794 api_server.go:141] control plane version: v1.32.0
	I1002 21:11:34.845875  527794 api_server.go:131] duration metric: took 6.654312ms to wait for apiserver health ...
	I1002 21:11:34.845885  527794 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:11:34.850279  527794 system_pods.go:59] 7 kube-system pods found
	I1002 21:11:34.850313  527794 system_pods.go:61] "coredns-668d6bf9bc-zl8zp" [4282759a-512a-4c97-8733-8d7ba955fecd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:11:34.850323  527794 system_pods.go:61] "etcd-test-preload-105781" [cfda92e9-0131-47cb-96be-3d7a627501a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:11:34.850334  527794 system_pods.go:61] "kube-apiserver-test-preload-105781" [996542ee-fa90-4c26-b452-0c238df99a00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:11:34.850345  527794 system_pods.go:61] "kube-controller-manager-test-preload-105781" [e0b946be-9a7e-4b1a-9906-5099138e7ca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:11:34.850355  527794 system_pods.go:61] "kube-proxy-njwsk" [f6c64448-6037-4e83-b892-6e457d6d832f] Running
	I1002 21:11:34.850365  527794 system_pods.go:61] "kube-scheduler-test-preload-105781" [21c820f9-1927-4457-90cb-86bc59801a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:11:34.850371  527794 system_pods.go:61] "storage-provisioner" [89bb4083-ca27-40f4-8d74-ef5cbfc483b6] Running
	I1002 21:11:34.850380  527794 system_pods.go:74] duration metric: took 4.486752ms to wait for pod list to return data ...
	I1002 21:11:34.850393  527794 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:11:34.852748  527794 default_sa.go:45] found service account: "default"
	I1002 21:11:34.852766  527794 default_sa.go:55] duration metric: took 2.364015ms for default service account to be created ...
	I1002 21:11:34.852776  527794 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:11:34.856195  527794 system_pods.go:86] 7 kube-system pods found
	I1002 21:11:34.856228  527794 system_pods.go:89] "coredns-668d6bf9bc-zl8zp" [4282759a-512a-4c97-8733-8d7ba955fecd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:11:34.856239  527794 system_pods.go:89] "etcd-test-preload-105781" [cfda92e9-0131-47cb-96be-3d7a627501a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:11:34.856250  527794 system_pods.go:89] "kube-apiserver-test-preload-105781" [996542ee-fa90-4c26-b452-0c238df99a00] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:11:34.856277  527794 system_pods.go:89] "kube-controller-manager-test-preload-105781" [e0b946be-9a7e-4b1a-9906-5099138e7ca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:11:34.856286  527794 system_pods.go:89] "kube-proxy-njwsk" [f6c64448-6037-4e83-b892-6e457d6d832f] Running
	I1002 21:11:34.856300  527794 system_pods.go:89] "kube-scheduler-test-preload-105781" [21c820f9-1927-4457-90cb-86bc59801a4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:11:34.856308  527794 system_pods.go:89] "storage-provisioner" [89bb4083-ca27-40f4-8d74-ef5cbfc483b6] Running
	I1002 21:11:34.856318  527794 system_pods.go:126] duration metric: took 3.535295ms to wait for k8s-apps to be running ...
	I1002 21:11:34.856331  527794 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:11:34.856388  527794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:11:34.873798  527794 system_svc.go:56] duration metric: took 17.45963ms WaitForService to wait for kubelet
	I1002 21:11:34.873825  527794 kubeadm.go:586] duration metric: took 299.462266ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:34.873846  527794 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:11:34.877738  527794 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 21:11:34.877758  527794 node_conditions.go:123] node cpu capacity is 2
	I1002 21:11:34.877770  527794 node_conditions.go:105] duration metric: took 3.915414ms to run NodePressure ...
	I1002 21:11:34.877783  527794 start.go:241] waiting for startup goroutines ...
	I1002 21:11:34.978855  527794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:11:34.988306  527794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:11:35.587840  527794 main.go:141] libmachine: Making call to close driver server
	I1002 21:11:35.587867  527794 main.go:141] libmachine: (test-preload-105781) Calling .Close
	I1002 21:11:35.587885  527794 main.go:141] libmachine: Making call to close driver server
	I1002 21:11:35.587907  527794 main.go:141] libmachine: (test-preload-105781) Calling .Close
	I1002 21:11:35.588180  527794 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:11:35.588195  527794 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:11:35.588201  527794 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:11:35.588208  527794 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:11:35.588211  527794 main.go:141] libmachine: Making call to close driver server
	I1002 21:11:35.588226  527794 main.go:141] libmachine: (test-preload-105781) DBG | Closing plugin on server side
	I1002 21:11:35.588262  527794 main.go:141] libmachine: Making call to close driver server
	I1002 21:11:35.588285  527794 main.go:141] libmachine: (test-preload-105781) Calling .Close
	I1002 21:11:35.588300  527794 main.go:141] libmachine: (test-preload-105781) Calling .Close
	I1002 21:11:35.588534  527794 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:11:35.588548  527794 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:11:35.588554  527794 main.go:141] libmachine: (test-preload-105781) DBG | Closing plugin on server side
	I1002 21:11:35.588581  527794 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:11:35.588592  527794 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:11:35.594627  527794 main.go:141] libmachine: Making call to close driver server
	I1002 21:11:35.594662  527794 main.go:141] libmachine: (test-preload-105781) Calling .Close
	I1002 21:11:35.594928  527794 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:11:35.594944  527794 main.go:141] libmachine: (test-preload-105781) DBG | Closing plugin on server side
	I1002 21:11:35.594948  527794 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:11:35.596482  527794 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:11:35.597350  527794 addons.go:514] duration metric: took 1.022967945s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:11:35.597383  527794 start.go:246] waiting for cluster config update ...
	I1002 21:11:35.597397  527794 start.go:255] writing updated cluster config ...
	I1002 21:11:35.597640  527794 ssh_runner.go:195] Run: rm -f paused
	I1002 21:11:35.603536  527794 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:11:35.604045  527794 kapi.go:59] client config for test-preload-105781: &rest.Config{Host:"https://192.168.39.138:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/profiles/test-preload-105781/client.key", CAFile:"/home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:11:35.606864  527794 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-zl8zp" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:11:37.612887  527794 pod_ready.go:104] pod "coredns-668d6bf9bc-zl8zp" is not "Ready", error: <nil>
	W1002 21:11:40.112490  527794 pod_ready.go:104] pod "coredns-668d6bf9bc-zl8zp" is not "Ready", error: <nil>
	I1002 21:11:41.116973  527794 pod_ready.go:94] pod "coredns-668d6bf9bc-zl8zp" is "Ready"
	I1002 21:11:41.117011  527794 pod_ready.go:86] duration metric: took 5.51012313s for pod "coredns-668d6bf9bc-zl8zp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:41.123461  527794 pod_ready.go:83] waiting for pod "etcd-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:41.630009  527794 pod_ready.go:94] pod "etcd-test-preload-105781" is "Ready"
	I1002 21:11:41.630048  527794 pod_ready.go:86] duration metric: took 506.562109ms for pod "etcd-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:41.632276  527794 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:41.636206  527794 pod_ready.go:94] pod "kube-apiserver-test-preload-105781" is "Ready"
	I1002 21:11:41.636237  527794 pod_ready.go:86] duration metric: took 3.940217ms for pod "kube-apiserver-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:41.638578  527794 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:11:43.644674  527794 pod_ready.go:104] pod "kube-controller-manager-test-preload-105781" is not "Ready", error: <nil>
	W1002 21:11:46.144222  527794 pod_ready.go:104] pod "kube-controller-manager-test-preload-105781" is not "Ready", error: <nil>
	W1002 21:11:48.144674  527794 pod_ready.go:104] pod "kube-controller-manager-test-preload-105781" is not "Ready", error: <nil>
	I1002 21:11:49.144359  527794 pod_ready.go:94] pod "kube-controller-manager-test-preload-105781" is "Ready"
	I1002 21:11:49.144389  527794 pod_ready.go:86] duration metric: took 7.505789982s for pod "kube-controller-manager-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:49.146521  527794 pod_ready.go:83] waiting for pod "kube-proxy-njwsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:49.150490  527794 pod_ready.go:94] pod "kube-proxy-njwsk" is "Ready"
	I1002 21:11:49.150511  527794 pod_ready.go:86] duration metric: took 3.969933ms for pod "kube-proxy-njwsk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:49.152364  527794 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:49.156198  527794 pod_ready.go:94] pod "kube-scheduler-test-preload-105781" is "Ready"
	I1002 21:11:49.156225  527794 pod_ready.go:86] duration metric: took 3.841179ms for pod "kube-scheduler-test-preload-105781" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:11:49.156236  527794 pod_ready.go:40] duration metric: took 13.552670751s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:11:49.199161  527794 start.go:623] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1002 21:11:49.200503  527794 out.go:203] 
	W1002 21:11:49.201301  527794 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1002 21:11:49.202097  527794 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1002 21:11:49.202894  527794 out.go:179] * Done! kubectl is now configured to use "test-preload-105781" cluster and "default" namespace by default
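	
	The "minor skew: 2" figure above comes from comparing the host kubectl (1.34.1) against the cluster's control plane (1.32.0); kubectl officially supports only one minor version of skew against the kube-apiserver, which is why the warning fires and the `minikube kubectl` fallback is suggested. A minimal Go sketch of how such a skew check can be computed (an illustration only, not minikube's actual start.go implementation):
	
	    package main
	
	    import (
	        "fmt"
	        "strconv"
	        "strings"
	    )
	
	    // minorSkew returns the absolute difference between the minor components
	    // of two "major.minor.patch" version strings, e.g. "1.34.1" vs "1.32.0" -> 2.
	    func minorSkew(a, b string) (int, error) {
	        minor := func(v string) (int, error) {
	            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	            if len(parts) < 2 {
	                return 0, fmt.Errorf("unexpected version format %q", v)
	            }
	            return strconv.Atoi(parts[1])
	        }
	        ma, err := minor(a)
	        if err != nil {
	            return 0, err
	        }
	        mb, err := minor(b)
	        if err != nil {
	            return 0, err
	        }
	        if ma < mb {
	            ma, mb = mb, ma
	        }
	        return ma - mb, nil
	    }
	
	    func main() {
	        skew, _ := minorSkew("1.34.1", "1.32.0")
	        // Prints: kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	        fmt.Printf("kubectl: 1.34.1, cluster: 1.32.0 (minor skew: %d)\n", skew)
	    }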
	
	
	==> CRI-O <==
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.094926959Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22cced8c-914b-4a44-935b-f9f8782894a3 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.096565948Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae3cf298-f6d7-47e3-9fac-3279c70ff36e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.097235592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759439510097171310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae3cf298-f6d7-47e3-9fac-3279c70ff36e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.098049688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4411a28-64d9-4c18-a91e-989ea418ea7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.098099206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4411a28-64d9-4c18-a91e-989ea418ea7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.098242876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a53fae28377ca2a25458fff8f0ff8ee95bfb80ae5e9d4232b7cb64d61ac0e00,PodSandboxId:046defe2d19e39d1747f4d319c5deaa6b3def1539846eaf26085bc7944ebe0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759439497132466378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zl8zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4282759a-512a-4c97-8733-8d7ba955fecd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e149ae3385f4372a3021859f610dfe9b0c5e475d9cbc913ed8051a3d3e1e36a8,PodSandboxId:852c3a65eea91f618cee4f42ab64a4131135f9c09ee872f8ca110fafb97d607f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759439493543331991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 89bb4083-ca27-40f4-8d74-ef5cbfc483b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d102b621108f40c18c754f912b2907a56932c6567d528812284af53aceb601c,PodSandboxId:15fb17b5a6b17d92f27f8a944009d3b5fe500fb8ec281ce000ce23754d630cd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759439493506776999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njwsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6
c64448-6037-4e83-b892-6e457d6d832f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5689134bb25b9196dce1d28e2bd04c48acc5f282933d396e57f4f63c26b81eee,PodSandboxId:894ec7cfb1b164bd722c31c195e6c8fa9b85ab9f5e684d836e4742a42f44c7c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759439490308662967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51bced87486a8c689f8a3b545312040e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7f4922fcd77dc4dd614498a3d8e95ebb60b0c3720261db9ff91cf89b454817,PodSandboxId:23ee4ad4369317aa985dd336988291bdb1a24d89e6af0edaeaf6d5608162c7f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759439490297246438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bdf28bd9bb334304e51064a81f5530,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d0c938af47a0f28a3e453c826b7ebc76fb31460c8c1f0cad9545928c59a443,PodSandboxId:ba052d13f3210745468f56d3e35b644045f4edd23002d7b5c5cb60e6a2b7e1bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759439490290052846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa743396bff7e5d005485cea091039a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801426c4af827b189e07a91da0bfc717e098bbd8254406af2b2f5280fc793512,PodSandboxId:7f1b143c48a9c1e72a140b551350622d759d8e4de857bab0b426c2e0df4b15d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759439490281936712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968de8dfdb29e985e8f9375700545e26,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4411a28-64d9-4c18-a91e-989ea418ea7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.127837822Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37d97e77-6a6d-47f0-8031-366d0f007f43 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.128022920Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:046defe2d19e39d1747f4d319c5deaa6b3def1539846eaf26085bc7944ebe0c5,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-zl8zp,Uid:4282759a-512a-4c97-8733-8d7ba955fecd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759439496934420037,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-zl8zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4282759a-512a-4c97-8733-8d7ba955fecd,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-02T21:11:33.096062588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15fb17b5a6b17d92f27f8a944009d3b5fe500fb8ec281ce000ce23754d630cd9,Metadata:&PodSandboxMetadata{Name:kube-proxy-njwsk,Uid:f6c64448-6037-4e83-b892-6e457d6d832f,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1759439493409508208,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-njwsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6c64448-6037-4e83-b892-6e457d6d832f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-02T21:11:33.096058746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:852c3a65eea91f618cee4f42ab64a4131135f9c09ee872f8ca110fafb97d607f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:89bb4083-ca27-40f4-8d74-ef5cbfc483b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759439493403949515,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb4083-ca27-40f4-8d74-ef5c
bfc483b6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-02T21:11:33.096061068Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:894ec7cfb1b164bd722c31c195e6c8fa9b85ab9f5e684d836e4742a42f44c7c7,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-105781,Uid:51bced87486a8c689
f8a3b545312040e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759439490094972756,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51bced87486a8c689f8a3b545312040e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.138:2379,kubernetes.io/config.hash: 51bced87486a8c689f8a3b545312040e,kubernetes.io/config.seen: 2025-10-02T21:11:28.161869451Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7f1b143c48a9c1e72a140b551350622d759d8e4de857bab0b426c2e0df4b15d9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-105781,Uid:968de8dfdb29e985e8f9375700545e26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759439490087938475,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-pr
eload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968de8dfdb29e985e8f9375700545e26,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 968de8dfdb29e985e8f9375700545e26,kubernetes.io/config.seen: 2025-10-02T21:11:28.094549224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba052d13f3210745468f56d3e35b644045f4edd23002d7b5c5cb60e6a2b7e1bc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-105781,Uid:aa743396bff7e5d005485cea091039a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759439490083406261,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa743396bff7e5d005485cea091039a2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa743396bff7e5d005485cea091039a2,kubernetes.io/config.seen: 2025-10-02T21
:11:28.094547907Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:23ee4ad4369317aa985dd336988291bdb1a24d89e6af0edaeaf6d5608162c7f4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-105781,Uid:d4bdf28bd9bb334304e51064a81f5530,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759439490075200804,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bdf28bd9bb334304e51064a81f5530,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.138:8443,kubernetes.io/config.hash: d4bdf28bd9bb334304e51064a81f5530,kubernetes.io/config.seen: 2025-10-02T21:11:28.094544896Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=37d97e77-6a6d-47f0-8031-366d0f007f43 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.128657520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf00a57e-1fdf-4a76-adf8-9f2eeb14451d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.128709943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf00a57e-1fdf-4a76-adf8-9f2eeb14451d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.128861630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a53fae28377ca2a25458fff8f0ff8ee95bfb80ae5e9d4232b7cb64d61ac0e00,PodSandboxId:046defe2d19e39d1747f4d319c5deaa6b3def1539846eaf26085bc7944ebe0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759439497132466378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zl8zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4282759a-512a-4c97-8733-8d7ba955fecd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e149ae3385f4372a3021859f610dfe9b0c5e475d9cbc913ed8051a3d3e1e36a8,PodSandboxId:852c3a65eea91f618cee4f42ab64a4131135f9c09ee872f8ca110fafb97d607f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759439493543331991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 89bb4083-ca27-40f4-8d74-ef5cbfc483b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d102b621108f40c18c754f912b2907a56932c6567d528812284af53aceb601c,PodSandboxId:15fb17b5a6b17d92f27f8a944009d3b5fe500fb8ec281ce000ce23754d630cd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759439493506776999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njwsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6
c64448-6037-4e83-b892-6e457d6d832f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5689134bb25b9196dce1d28e2bd04c48acc5f282933d396e57f4f63c26b81eee,PodSandboxId:894ec7cfb1b164bd722c31c195e6c8fa9b85ab9f5e684d836e4742a42f44c7c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759439490308662967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51bced87486a8c689f8a3b545312040e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7f4922fcd77dc4dd614498a3d8e95ebb60b0c3720261db9ff91cf89b454817,PodSandboxId:23ee4ad4369317aa985dd336988291bdb1a24d89e6af0edaeaf6d5608162c7f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759439490297246438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bdf28bd9bb334304e51064a81f5530,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d0c938af47a0f28a3e453c826b7ebc76fb31460c8c1f0cad9545928c59a443,PodSandboxId:ba052d13f3210745468f56d3e35b644045f4edd23002d7b5c5cb60e6a2b7e1bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759439490290052846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa743396bff7e5d005485cea091039a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801426c4af827b189e07a91da0bfc717e098bbd8254406af2b2f5280fc793512,PodSandboxId:7f1b143c48a9c1e72a140b551350622d759d8e4de857bab0b426c2e0df4b15d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759439490281936712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968de8dfdb29e985e8f9375700545e26,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf00a57e-1fdf-4a76-adf8-9f2eeb14451d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.134246344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58c82652-9c62-4e7f-88e3-bdc2dc0095a8 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.134343053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58c82652-9c62-4e7f-88e3-bdc2dc0095a8 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.135578578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c185bd92-9ffe-409e-9451-75a86e8aab58 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.136096608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759439510136077636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c185bd92-9ffe-409e-9451-75a86e8aab58 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.136789506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32701e61-807b-4112-ba7c-3b122f1fb1f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.136881696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32701e61-807b-4112-ba7c-3b122f1fb1f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.137044762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a53fae28377ca2a25458fff8f0ff8ee95bfb80ae5e9d4232b7cb64d61ac0e00,PodSandboxId:046defe2d19e39d1747f4d319c5deaa6b3def1539846eaf26085bc7944ebe0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759439497132466378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zl8zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4282759a-512a-4c97-8733-8d7ba955fecd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e149ae3385f4372a3021859f610dfe9b0c5e475d9cbc913ed8051a3d3e1e36a8,PodSandboxId:852c3a65eea91f618cee4f42ab64a4131135f9c09ee872f8ca110fafb97d607f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759439493543331991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 89bb4083-ca27-40f4-8d74-ef5cbfc483b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d102b621108f40c18c754f912b2907a56932c6567d528812284af53aceb601c,PodSandboxId:15fb17b5a6b17d92f27f8a944009d3b5fe500fb8ec281ce000ce23754d630cd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759439493506776999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njwsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6
c64448-6037-4e83-b892-6e457d6d832f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5689134bb25b9196dce1d28e2bd04c48acc5f282933d396e57f4f63c26b81eee,PodSandboxId:894ec7cfb1b164bd722c31c195e6c8fa9b85ab9f5e684d836e4742a42f44c7c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759439490308662967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51bced87486a8c689f8a3b545312040e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7f4922fcd77dc4dd614498a3d8e95ebb60b0c3720261db9ff91cf89b454817,PodSandboxId:23ee4ad4369317aa985dd336988291bdb1a24d89e6af0edaeaf6d5608162c7f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759439490297246438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bdf28bd9bb334304e51064a81f5530,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d0c938af47a0f28a3e453c826b7ebc76fb31460c8c1f0cad9545928c59a443,PodSandboxId:ba052d13f3210745468f56d3e35b644045f4edd23002d7b5c5cb60e6a2b7e1bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759439490290052846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa743396bff7e5d005485cea091039a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801426c4af827b189e07a91da0bfc717e098bbd8254406af2b2f5280fc793512,PodSandboxId:7f1b143c48a9c1e72a140b551350622d759d8e4de857bab0b426c2e0df4b15d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759439490281936712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968de8dfdb29e985e8f9375700545e26,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32701e61-807b-4112-ba7c-3b122f1fb1f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.168233110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47a40504-6687-496c-82d9-90183bd1af58 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.168308359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47a40504-6687-496c-82d9-90183bd1af58 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.169387552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81688318-82ce-4d8d-a0ad-38146b110ae1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.169869502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759439510169850671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81688318-82ce-4d8d-a0ad-38146b110ae1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.171175210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=465a2f3f-855a-40fb-bc92-8f1fcbc12892 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.171223877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=465a2f3f-855a-40fb-bc92-8f1fcbc12892 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:11:50 test-preload-105781 crio[830]: time="2025-10-02 21:11:50.171368330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a53fae28377ca2a25458fff8f0ff8ee95bfb80ae5e9d4232b7cb64d61ac0e00,PodSandboxId:046defe2d19e39d1747f4d319c5deaa6b3def1539846eaf26085bc7944ebe0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759439497132466378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zl8zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4282759a-512a-4c97-8733-8d7ba955fecd,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e149ae3385f4372a3021859f610dfe9b0c5e475d9cbc913ed8051a3d3e1e36a8,PodSandboxId:852c3a65eea91f618cee4f42ab64a4131135f9c09ee872f8ca110fafb97d607f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759439493543331991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 89bb4083-ca27-40f4-8d74-ef5cbfc483b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d102b621108f40c18c754f912b2907a56932c6567d528812284af53aceb601c,PodSandboxId:15fb17b5a6b17d92f27f8a944009d3b5fe500fb8ec281ce000ce23754d630cd9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759439493506776999,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njwsk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6
c64448-6037-4e83-b892-6e457d6d832f,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5689134bb25b9196dce1d28e2bd04c48acc5f282933d396e57f4f63c26b81eee,PodSandboxId:894ec7cfb1b164bd722c31c195e6c8fa9b85ab9f5e684d836e4742a42f44c7c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759439490308662967,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51bced87486a8c689f8a3b545312040e,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7f4922fcd77dc4dd614498a3d8e95ebb60b0c3720261db9ff91cf89b454817,PodSandboxId:23ee4ad4369317aa985dd336988291bdb1a24d89e6af0edaeaf6d5608162c7f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759439490297246438,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4bdf28bd9bb334304e51064a81f5530,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7d0c938af47a0f28a3e453c826b7ebc76fb31460c8c1f0cad9545928c59a443,PodSandboxId:ba052d13f3210745468f56d3e35b644045f4edd23002d7b5c5cb60e6a2b7e1bc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759439490290052846,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa743396bff7e5d005485cea091039a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:801426c4af827b189e07a91da0bfc717e098bbd8254406af2b2f5280fc793512,PodSandboxId:7f1b143c48a9c1e72a140b551350622d759d8e4de857bab0b426c2e0df4b15d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759439490281936712,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-105781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968de8dfdb29e985e8f9375700545e26,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=465a2f3f-855a-40fb-bc92-8f1fcbc12892 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9a53fae28377c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   046defe2d19e3       coredns-668d6bf9bc-zl8zp
	e149ae3385f43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   852c3a65eea91       storage-provisioner
	3d102b621108f       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   15fb17b5a6b17       kube-proxy-njwsk
	5689134bb25b9       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   894ec7cfb1b16       etcd-test-preload-105781
	1d7f4922fcd77       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   23ee4ad436931       kube-apiserver-test-preload-105781
	a7d0c938af47a       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   ba052d13f3210       kube-controller-manager-test-preload-105781
	801426c4af827       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   7f1b143c48a9c       kube-scheduler-test-preload-105781
	
	
	==> coredns [9a53fae28377ca2a25458fff8f0ff8ee95bfb80ae5e9d4232b7cb64d61ac0e00] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37789 - 23112 "HINFO IN 2594837311673503035.6576948173604573677. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035950292s
	
	
	==> describe nodes <==
	Name:               test-preload-105781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-105781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=test-preload-105781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_10_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:10:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-105781
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:11:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:11:34 +0000   Thu, 02 Oct 2025 21:09:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:11:34 +0000   Thu, 02 Oct 2025 21:09:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:11:34 +0000   Thu, 02 Oct 2025 21:09:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:11:34 +0000   Thu, 02 Oct 2025 21:11:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.138
	  Hostname:    test-preload-105781
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e2be3e96f734ffb84c170c8fcd313b8
	  System UUID:                7e2be3e9-6f73-4ffb-84c1-70c8fcd313b8
	  Boot ID:                    6bb0d118-cfc6-4668-b2c6-489e166bc959
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-zl8zp                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     103s
	  kube-system                 etcd-test-preload-105781                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         109s
	  kube-system                 kube-apiserver-test-preload-105781             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-105781    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-njwsk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-test-preload-105781             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 101s                 kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   Starting                 113s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node test-preload-105781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node test-preload-105781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s (x7 over 113s)  kubelet          Node test-preload-105781 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    108s                 kubelet          Node test-preload-105781 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  108s                 kubelet          Node test-preload-105781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     108s                 kubelet          Node test-preload-105781 status is now: NodeHasSufficientPID
	  Normal   Starting                 108s                 kubelet          Starting kubelet.
	  Normal   NodeReady                107s                 kubelet          Node test-preload-105781 status is now: NodeReady
	  Normal   RegisteredNode           104s                 node-controller  Node test-preload-105781 event: Registered Node test-preload-105781 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-105781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-105781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-105781 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                  kubelet          Node test-preload-105781 has been rebooted, boot id: 6bb0d118-cfc6-4668-b2c6-489e166bc959
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-105781 event: Registered Node test-preload-105781 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:11] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000032] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006609] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.003038] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.080408] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.091395] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.455678] kauditd_printk_skb: 177 callbacks suppressed
	[  +4.058536] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [5689134bb25b9196dce1d28e2bd04c48acc5f282933d396e57f4f63c26b81eee] <==
	{"level":"info","ts":"2025-10-02T21:11:30.750539Z","caller":"etcdserver/server.go:757","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"fdd267ffc1b7c75a","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-02T21:11:30.750776Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:11:30.767875Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:11:30.752981Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T21:11:30.753129Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2025-10-02T21:11:30.768031Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.138:2380"}
	{"level":"info","ts":"2025-10-02T21:11:30.767911Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T21:11:30.768188Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"fdd267ffc1b7c75a","initial-advertise-peer-urls":["https://192.168.39.138:2380"],"listen-peer-urls":["https://192.168.39.138:2380"],"advertise-client-urls":["https://192.168.39.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T21:11:30.768257Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T21:11:31.713293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T21:11:31.713332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T21:11:31.713390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a received MsgPreVoteResp from fdd267ffc1b7c75a at term 2"}
	{"level":"info","ts":"2025-10-02T21:11:31.713416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T21:11:31.713422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a received MsgVoteResp from fdd267ffc1b7c75a at term 3"}
	{"level":"info","ts":"2025-10-02T21:11:31.713430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fdd267ffc1b7c75a became leader at term 3"}
	{"level":"info","ts":"2025-10-02T21:11:31.713437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fdd267ffc1b7c75a elected leader fdd267ffc1b7c75a at term 3"}
	{"level":"info","ts":"2025-10-02T21:11:31.715624Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"fdd267ffc1b7c75a","local-member-attributes":"{Name:test-preload-105781 ClientURLs:[https://192.168.39.138:2379]}","request-path":"/0/members/fdd267ffc1b7c75a/attributes","cluster-id":"63b27a6ce7f4c58a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T21:11:31.715632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:11:31.715832Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T21:11:31.716073Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T21:11:31.716098Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T21:11:31.716459Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T21:11:31.716624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T21:11:31.717546Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.138:2379"}
	{"level":"info","ts":"2025-10-02T21:11:31.718062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:11:50 up 0 min,  0 users,  load average: 0.38, 0.11, 0.04
	Linux test-preload-105781 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1d7f4922fcd77dc4dd614498a3d8e95ebb60b0c3720261db9ff91cf89b454817] <==
	I1002 21:11:32.847261       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:11:32.858198       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1002 21:11:32.862230       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:11:32.862345       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1002 21:11:32.862360       1 shared_informer.go:320] Caches are synced for configmaps
	I1002 21:11:32.862445       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:11:32.862842       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:11:32.877971       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1002 21:11:32.878032       1 aggregator.go:171] initial CRD sync complete...
	I1002 21:11:32.878039       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 21:11:32.878044       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:11:32.878048       1 cache.go:39] Caches are synced for autoregister controller
	E1002 21:11:32.881453       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:11:32.915230       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1002 21:11:32.915257       1 policy_source.go:240] refreshing policies
	I1002 21:11:32.974003       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:11:33.187528       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1002 21:11:33.752019       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:11:34.350486       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1002 21:11:34.378714       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1002 21:11:34.404741       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:11:34.411062       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:11:36.347155       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:11:36.396750       1 controller.go:615] quota admission added evaluator for: endpoints
	I1002 21:11:36.448344       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a7d0c938af47a0f28a3e453c826b7ebc76fb31460c8c1f0cad9545928c59a443] <==
	I1002 21:11:36.044519       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1002 21:11:36.045706       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1002 21:11:36.045749       1 shared_informer.go:320] Caches are synced for daemon sets
	I1002 21:11:36.045817       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1002 21:11:36.047241       1 shared_informer.go:320] Caches are synced for TTL
	I1002 21:11:36.049548       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1002 21:11:36.050708       1 shared_informer.go:320] Caches are synced for resource quota
	I1002 21:11:36.052936       1 shared_informer.go:320] Caches are synced for GC
	I1002 21:11:36.054118       1 shared_informer.go:320] Caches are synced for resource quota
	I1002 21:11:36.055263       1 shared_informer.go:320] Caches are synced for taint
	I1002 21:11:36.055359       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:11:36.055442       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-105781"
	I1002 21:11:36.055470       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:11:36.058463       1 shared_informer.go:320] Caches are synced for PVC protection
	I1002 21:11:36.060289       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1002 21:11:36.063638       1 shared_informer.go:320] Caches are synced for ephemeral
	I1002 21:11:36.064816       1 shared_informer.go:320] Caches are synced for garbage collector
	I1002 21:11:36.064833       1 shared_informer.go:320] Caches are synced for job
	I1002 21:11:36.070527       1 shared_informer.go:320] Caches are synced for disruption
	I1002 21:11:36.088781       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1002 21:11:36.454107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="408.26271ms"
	I1002 21:11:36.454199       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.805µs"
	I1002 21:11:37.226431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="43.97µs"
	I1002 21:11:41.034019       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="9.687155ms"
	I1002 21:11:41.035373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.078247ms"
	
	
	==> kube-proxy [3d102b621108f40c18c754f912b2907a56932c6567d528812284af53aceb601c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 21:11:33.716718       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 21:11:33.727026       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.138"]
	E1002 21:11:33.727090       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:11:33.776556       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1002 21:11:33.776646       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 21:11:33.776672       1 server_linux.go:170] "Using iptables Proxier"
	I1002 21:11:33.780043       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:11:33.780523       1 server.go:497] "Version info" version="v1.32.0"
	I1002 21:11:33.780654       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:11:33.790717       1 config.go:199] "Starting service config controller"
	I1002 21:11:33.791577       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 21:11:33.791020       1 config.go:105] "Starting endpoint slice config controller"
	I1002 21:11:33.791697       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 21:11:33.791382       1 config.go:329] "Starting node config controller"
	I1002 21:11:33.791727       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 21:11:33.892382       1 shared_informer.go:320] Caches are synced for service config
	I1002 21:11:33.892549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1002 21:11:33.892987       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [801426c4af827b189e07a91da0bfc717e098bbd8254406af2b2f5280fc793512] <==
	I1002 21:11:31.344924       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:11:32.798559       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:11:32.798680       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:11:32.798703       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:11:32.798727       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:11:32.877263       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1002 21:11:32.878665       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:11:32.882148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:11:32.882186       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 21:11:32.883014       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 21:11:32.883071       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:11:32.982634       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.022831    1148 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-105781\" already exists" pod="kube-system/kube-apiserver-test-preload-105781"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.022858    1148 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-105781"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.041031    1148 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-105781\" already exists" pod="kube-system/kube-controller-manager-test-preload-105781"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.041072    1148 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-105781"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.053935    1148 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-105781\" already exists" pod="kube-system/kube-scheduler-test-preload-105781"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.053981    1148 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-105781"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.065244    1148 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-105781\" already exists" pod="kube-system/etcd-test-preload-105781"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.092538    1148 apiserver.go:52] "Watching apiserver"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.098096    1148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-zl8zp" podUID="4282759a-512a-4c97-8733-8d7ba955fecd"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.102015    1148 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.180443    1148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6c64448-6037-4e83-b892-6e457d6d832f-xtables-lock\") pod \"kube-proxy-njwsk\" (UID: \"f6c64448-6037-4e83-b892-6e457d6d832f\") " pod="kube-system/kube-proxy-njwsk"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.180614    1148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/89bb4083-ca27-40f4-8d74-ef5cbfc483b6-tmp\") pod \"storage-provisioner\" (UID: \"89bb4083-ca27-40f4-8d74-ef5cbfc483b6\") " pod="kube-system/storage-provisioner"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: I1002 21:11:33.180697    1148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6c64448-6037-4e83-b892-6e457d6d832f-lib-modules\") pod \"kube-proxy-njwsk\" (UID: \"f6c64448-6037-4e83-b892-6e457d6d832f\") " pod="kube-system/kube-proxy-njwsk"
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.180826    1148 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.181323    1148 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4282759a-512a-4c97-8733-8d7ba955fecd-config-volume podName:4282759a-512a-4c97-8733-8d7ba955fecd nodeName:}" failed. No retries permitted until 2025-10-02 21:11:33.681189724 +0000 UTC m=+5.674615838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4282759a-512a-4c97-8733-8d7ba955fecd-config-volume") pod "coredns-668d6bf9bc-zl8zp" (UID: "4282759a-512a-4c97-8733-8d7ba955fecd") : object "kube-system"/"coredns" not registered
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.684336    1148 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 21:11:33 test-preload-105781 kubelet[1148]: E1002 21:11:33.684419    1148 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4282759a-512a-4c97-8733-8d7ba955fecd-config-volume podName:4282759a-512a-4c97-8733-8d7ba955fecd nodeName:}" failed. No retries permitted until 2025-10-02 21:11:34.684403912 +0000 UTC m=+6.677830022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4282759a-512a-4c97-8733-8d7ba955fecd-config-volume") pod "coredns-668d6bf9bc-zl8zp" (UID: "4282759a-512a-4c97-8733-8d7ba955fecd") : object "kube-system"/"coredns" not registered
	Oct 02 21:11:34 test-preload-105781 kubelet[1148]: I1002 21:11:34.673868    1148 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 02 21:11:34 test-preload-105781 kubelet[1148]: E1002 21:11:34.691324    1148 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 21:11:34 test-preload-105781 kubelet[1148]: E1002 21:11:34.691406    1148 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4282759a-512a-4c97-8733-8d7ba955fecd-config-volume podName:4282759a-512a-4c97-8733-8d7ba955fecd nodeName:}" failed. No retries permitted until 2025-10-02 21:11:36.691393233 +0000 UTC m=+8.684819333 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4282759a-512a-4c97-8733-8d7ba955fecd-config-volume") pod "coredns-668d6bf9bc-zl8zp" (UID: "4282759a-512a-4c97-8733-8d7ba955fecd") : object "kube-system"/"coredns" not registered
	Oct 02 21:11:38 test-preload-105781 kubelet[1148]: E1002 21:11:38.163155    1148 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759439498162910108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 21:11:38 test-preload-105781 kubelet[1148]: E1002 21:11:38.163176    1148 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759439498162910108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 21:11:41 test-preload-105781 kubelet[1148]: I1002 21:11:41.009998    1148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 21:11:48 test-preload-105781 kubelet[1148]: E1002 21:11:48.164327    1148 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759439508164087542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 21:11:48 test-preload-105781 kubelet[1148]: E1002 21:11:48.164348    1148 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759439508164087542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e149ae3385f4372a3021859f610dfe9b0c5e475d9cbc913ed8051a3d3e1e36a8] <==
	I1002 21:11:33.648714       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-105781 -n test-preload-105781
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-105781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-105781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-105781
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-105781: (1.032316436s)
--- FAIL: TestPreload (159.63s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (75.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-128856 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-128856 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.642666538s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-128856] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-128856" primary control-plane node in "pause-128856" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-128856" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:18:46.493027  537164 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:18:46.493313  537164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:18:46.493324  537164 out.go:374] Setting ErrFile to fd 2...
	I1002 21:18:46.493331  537164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:18:46.493539  537164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 21:18:46.494050  537164 out.go:368] Setting JSON to false
	I1002 21:18:46.495235  537164 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7261,"bootTime":1759432665,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:18:46.495333  537164 start.go:140] virtualization: kvm guest
	I1002 21:18:46.496889  537164 out.go:179] * [pause-128856] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:18:46.497931  537164 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:18:46.497951  537164 notify.go:220] Checking for updates...
	I1002 21:18:46.500115  537164 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:18:46.501045  537164 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:18:46.501880  537164 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 21:18:46.502673  537164 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:18:46.503588  537164 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:18:46.504829  537164 config.go:182] Loaded profile config "pause-128856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:18:46.505270  537164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:18:46.505329  537164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:18:46.520041  537164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I1002 21:18:46.520440  537164 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:18:46.521007  537164 main.go:141] libmachine: Using API Version  1
	I1002 21:18:46.521029  537164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:18:46.521445  537164 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:18:46.521647  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:18:46.521931  537164 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:18:46.522236  537164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:18:46.522309  537164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:18:46.535859  537164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43177
	I1002 21:18:46.536316  537164 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:18:46.536842  537164 main.go:141] libmachine: Using API Version  1
	I1002 21:18:46.536870  537164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:18:46.537282  537164 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:18:46.537497  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:18:46.572425  537164 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 21:18:46.573426  537164 start.go:304] selected driver: kvm2
	I1002 21:18:46.573447  537164 start.go:924] validating driver "kvm2" against &{Name:pause-128856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-instal
ler:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:18:46.573730  537164 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:18:46.574223  537164 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:18:46.574343  537164 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:18:46.589320  537164 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:18:46.589364  537164 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:18:46.604119  537164 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:18:46.604913  537164 cni.go:84] Creating CNI manager for ""
	I1002 21:18:46.604968  537164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:18:46.605022  537164 start.go:348] cluster config:
	{Name:pause-128856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:18:46.605148  537164 iso.go:125] acquiring lock: {Name:mk7586bb79dc7f44da54ee16895643204aac50ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:18:46.607287  537164 out.go:179] * Starting "pause-128856" primary control-plane node in "pause-128856" cluster
	I1002 21:18:46.608277  537164 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:18:46.608322  537164 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:18:46.608332  537164 cache.go:58] Caching tarball of preloaded images
	I1002 21:18:46.608446  537164 preload.go:233] Found /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:18:46.608459  537164 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
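The preload step above is a plain cache lookup: the per-version image tarball is looked for under .minikube/cache, and the download is skipped when it is present. A minimal sketch of that gate in Go; the zero-size guard is an assumption, since the log only shows the existence check:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the log line above; the fi.Size() > 0 guard is illustrative.
		p := "/home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		if fi, err := os.Stat(p); err == nil && fi.Size() > 0 {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload missing, would download:", err)
		}
	}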
	I1002 21:18:46.608634  537164 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/config.json ...
	I1002 21:18:46.608921  537164 start.go:360] acquireMachinesLock for pause-128856: {Name:mk9e7957cdce1fd4b26ce430105927ec465bcae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 21:19:16.666030  537164 start.go:364] duration metric: took 30.057071179s to acquireMachinesLock for "pause-128856"
	I1002 21:19:16.666081  537164 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:19:16.666092  537164 fix.go:54] fixHost starting: 
	I1002 21:19:16.666539  537164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:16.666594  537164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:16.685390  537164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I1002 21:19:16.685864  537164 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:16.686382  537164 main.go:141] libmachine: Using API Version  1
	I1002 21:19:16.686412  537164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:16.686916  537164 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:16.687196  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:16.687380  537164 main.go:141] libmachine: (pause-128856) Calling .GetState
	I1002 21:19:16.689234  537164 fix.go:112] recreateIfNeeded on pause-128856: state=Running err=<nil>
	W1002 21:19:16.689275  537164 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:19:16.692107  537164 out.go:252] * Updating the running kvm2 "pause-128856" VM ...
	I1002 21:19:16.692149  537164 machine.go:93] provisionDockerMachine start ...
	I1002 21:19:16.692170  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:16.692374  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:16.695246  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.695678  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:16.695721  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.695925  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:16.696080  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.696249  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.696368  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:16.696555  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.696836  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:16.696848  537164 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:19:16.812516  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-128856
	
	I1002 21:19:16.812542  537164 main.go:141] libmachine: (pause-128856) Calling .GetMachineName
	I1002 21:19:16.812963  537164 buildroot.go:166] provisioning hostname "pause-128856"
	I1002 21:19:16.812996  537164 main.go:141] libmachine: (pause-128856) Calling .GetMachineName
	I1002 21:19:16.813181  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:16.816641  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.817077  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:16.817117  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.817304  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:16.817539  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.817723  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.817899  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:16.818100  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.818333  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:16.818345  537164 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-128856 && echo "pause-128856" | sudo tee /etc/hostname
	I1002 21:19:16.956194  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-128856
	
	I1002 21:19:16.956241  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:16.959774  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.960211  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:16.960239  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.960483  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:16.960728  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.960913  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.961068  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:16.961261  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.961539  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:16.961558  537164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-128856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-128856/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-128856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:19:17.085504  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
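Provisioning happens entirely over SSH: each step above (setting the hostname, then an idempotent /etc/hosts patch) is one remote command whose combined output is logged. A minimal sketch of that pattern using golang.org/x/crypto/ssh; the key path and command here are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote runs one command over SSH and returns its combined output,
	// mirroring how each provisioning step above is issued and logged.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.39.39:22", "docker", "/path/to/id_rsa", "hostname")
		fmt.Println(out, err)
	}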
	I1002 21:19:17.085536  537164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21682-492630/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-492630/.minikube}
	I1002 21:19:17.085574  537164 buildroot.go:174] setting up certificates
	I1002 21:19:17.085586  537164 provision.go:84] configureAuth start
	I1002 21:19:17.085602  537164 main.go:141] libmachine: (pause-128856) Calling .GetMachineName
	I1002 21:19:17.085938  537164 main.go:141] libmachine: (pause-128856) Calling .GetIP
	I1002 21:19:17.089122  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.089647  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.089673  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.089894  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:17.092778  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.093295  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.093317  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.093522  537164 provision.go:143] copyHostCerts
	I1002 21:19:17.093586  537164 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem, removing ...
	I1002 21:19:17.093612  537164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem
	I1002 21:19:17.093676  537164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem (1078 bytes)
	I1002 21:19:17.093837  537164 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem, removing ...
	I1002 21:19:17.093850  537164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem
	I1002 21:19:17.093888  537164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem (1123 bytes)
	I1002 21:19:17.093989  537164 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem, removing ...
	I1002 21:19:17.094001  537164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem
	I1002 21:19:17.094051  537164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem (1675 bytes)
	I1002 21:19:17.094142  537164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem org=jenkins.pause-128856 san=[127.0.0.1 192.168.39.39 localhost minikube pause-128856]
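The server cert is generated with exactly the SAN set shown in the log (127.0.0.1, 192.168.39.39, localhost, minikube, pause-128856). A self-contained sketch of issuing such a cert with crypto/x509; it self-signs for brevity instead of signing with the minikube CA:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs and org mirror the logged values; the key type and validity are illustrative.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-128856"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.39")},
			DNSNames:     []string{"localhost", "minikube", "pause-128856"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}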
	I1002 21:19:17.204083  537164 provision.go:177] copyRemoteCerts
	I1002 21:19:17.204145  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:19:17.204177  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:17.207371  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.207778  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.207813  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.208091  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:17.208315  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:17.208504  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:17.208688  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:17.306236  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:19:17.340938  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 21:19:17.373580  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:19:17.404515  537164 provision.go:87] duration metric: took 318.909798ms to configureAuth
	I1002 21:19:17.404552  537164 buildroot.go:189] setting minikube options for container-runtime
	I1002 21:19:17.404854  537164 config.go:182] Loaded profile config "pause-128856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:19:17.404947  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:17.408220  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.408619  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.408671  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.408873  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:17.409089  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:17.409261  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:17.409388  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:17.409565  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:17.409860  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:17.409878  537164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:19:22.968991  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:19:22.969028  537164 machine.go:96] duration metric: took 6.276864804s to provisionDockerMachine
	I1002 21:19:22.969043  537164 start.go:293] postStartSetup for "pause-128856" (driver="kvm2")
	I1002 21:19:22.969056  537164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:19:22.969081  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:22.969508  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:19:22.969550  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:22.973346  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:22.973815  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:22.973846  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:22.974105  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:22.974292  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:22.974483  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:22.974646  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:23.069770  537164 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:19:23.075977  537164 info.go:137] Remote host: Buildroot 2025.02
	I1002 21:19:23.076010  537164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/addons for local assets ...
	I1002 21:19:23.076082  537164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/files for local assets ...
	I1002 21:19:23.076159  537164 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem -> 4975692.pem in /etc/ssl/certs
	I1002 21:19:23.076247  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:19:23.088678  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:19:23.124617  537164 start.go:296] duration metric: took 155.553551ms for postStartSetup
	I1002 21:19:23.124671  537164 fix.go:56] duration metric: took 6.458578183s for fixHost
	I1002 21:19:23.124700  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:23.128630  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.129116  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.129148  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.129436  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:23.129721  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.129955  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.130158  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:23.130381  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:23.130724  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:23.130740  537164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 21:19:23.251349  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759439963.248315388
	
	I1002 21:19:23.251379  537164 fix.go:216] guest clock: 1759439963.248315388
	I1002 21:19:23.251390  537164 fix.go:229] Guest: 2025-10-02 21:19:23.248315388 +0000 UTC Remote: 2025-10-02 21:19:23.124676817 +0000 UTC m=+36.674789897 (delta=123.638571ms)
	I1002 21:19:23.251457  537164 fix.go:200] guest clock delta is within tolerance: 123.638571ms
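The fixHost step reads the guest clock over SSH (`date +%s.%N`), compares it with the host-side timestamp for the same instant, and only resyncs when the difference exceeds a tolerance. The logged delta is reproducible from the two timestamps; the tolerance constant below is an assumption, since the log only states the delta was acceptable:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values taken from the log above: guest clock from `date +%s.%N`,
		// remote time recorded by the host for the same instant.
		guest := time.Unix(1759439963, 248315388)
		remote := time.Date(2025, 10, 2, 21, 19, 23, 124676817, time.UTC)

		delta := guest.Sub(remote)
		fmt.Println(delta) // 123.638571ms, matching the logged delta

		// Illustrative threshold; the actual tolerance is not shown in the log.
		const tolerance = time.Second
		fmt.Println("within tolerance:", delta > -tolerance && delta < tolerance)
	}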
	I1002 21:19:23.251464  537164 start.go:83] releasing machines lock for "pause-128856", held for 6.585405527s
	I1002 21:19:23.251498  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.251819  537164 main.go:141] libmachine: (pause-128856) Calling .GetIP
	I1002 21:19:23.256149  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.256653  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.256690  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.257139  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.257847  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.258041  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.258142  537164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:19:23.258210  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:23.258289  537164 ssh_runner.go:195] Run: cat /version.json
	I1002 21:19:23.258301  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:23.262923  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.263389  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.263446  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.263791  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.263940  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:23.264146  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.264353  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:23.264517  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:23.265476  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.265500  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.265818  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:23.266074  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.266290  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:23.266483  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:23.380477  537164 ssh_runner.go:195] Run: systemctl --version
	I1002 21:19:23.389280  537164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:19:23.546962  537164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:19:23.561043  537164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:19:23.561143  537164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:19:23.577463  537164 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:19:23.577504  537164 start.go:495] detecting cgroup driver to use...
	I1002 21:19:23.577587  537164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:19:23.605026  537164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:19:23.625892  537164 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:19:23.625974  537164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:19:23.648584  537164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:19:23.666721  537164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:19:23.899367  537164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:19:24.104366  537164 docker.go:234] disabling docker service ...
	I1002 21:19:24.104449  537164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:19:24.136729  537164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:19:24.155569  537164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:19:24.374974  537164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:19:24.590423  537164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:19:24.623794  537164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:19:24.655402  537164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:19:24.655495  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.669425  537164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:19:24.669525  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.685047  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.701192  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.720671  537164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:19:24.749912  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.765535  537164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.778968  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.794533  537164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:19:24.806679  537164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:19:24.817876  537164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:25.084347  537164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:19:32.451786  537164 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.367383323s)
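The CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf is edited in place with sed one-liners (pause image, cgroup manager, conmon cgroup, default sysctls), and crio is then restarted, the 7.4s step just logged. A hypothetical Go equivalent of one such key rewrite, assuming the same `key = value` layout the sed patterns target:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setTOMLKey rewrites any `key = ...` line in a crio drop-in config,
	// the same edit the sed one-liners above perform. Requires root to
	// touch /etc; purely illustrative.
	func setTOMLKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
		if err := setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
			panic(err)
		}
		if err := setTOMLKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
			panic(err)
		}
		// A `systemctl restart crio` is still needed afterwards, as logged above.
	}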
	I1002 21:19:32.451827  537164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:19:32.451890  537164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:19:32.459083  537164 start.go:563] Will wait 60s for crictl version
	I1002 21:19:32.459159  537164 ssh_runner.go:195] Run: which crictl
	I1002 21:19:32.463509  537164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:19:32.500119  537164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 21:19:32.500215  537164 ssh_runner.go:195] Run: crio --version
	I1002 21:19:32.531499  537164 ssh_runner.go:195] Run: crio --version
	I1002 21:19:32.563525  537164 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 21:19:32.564421  537164 main.go:141] libmachine: (pause-128856) Calling .GetIP
	I1002 21:19:32.567331  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:32.567864  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:32.567896  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:32.568212  537164 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 21:19:32.573154  537164 kubeadm.go:883] updating cluster {Name:pause-128856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:19:32.573385  537164 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:19:32.573457  537164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:19:32.617677  537164 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:19:32.617725  537164 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:19:32.617801  537164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:19:32.654217  537164 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:19:32.654249  537164 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:19:32.654259  537164 kubeadm.go:934] updating node { 192.168.39.39 8443 v1.34.1 crio true true} ...
	I1002 21:19:32.654391  537164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-128856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:19:32.654530  537164 ssh_runner.go:195] Run: crio config
	I1002 21:19:32.704437  537164 cni.go:84] Creating CNI manager for ""
	I1002 21:19:32.704471  537164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:19:32.704493  537164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:19:32.704523  537164 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-128856 NodeName:pause-128856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:19:32.704693  537164 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-128856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.39"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
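The generated file above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A sketch that splits and decodes them to sanity-check a field; the choice of gopkg.in/yaml.v3 is an assumption, any YAML decoder works:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Destination path taken from the scp step below (kubeadm.yaml.new).
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m map[string]any
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				panic(err)
			}
			// cgroupDriver is only set in the KubeletConfiguration document.
			fmt.Println("kind:", m["kind"], "cgroupDriver:", m["cgroupDriver"])
		}
	}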
	
	I1002 21:19:32.704794  537164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:19:32.719385  537164 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:19:32.719478  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:19:32.732745  537164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1002 21:19:32.753827  537164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:19:32.776821  537164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1002 21:19:32.802949  537164 ssh_runner.go:195] Run: grep 192.168.39.39	control-plane.minikube.internal$ /etc/hosts
	I1002 21:19:32.808080  537164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:33.025419  537164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:19:33.052792  537164 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856 for IP: 192.168.39.39
	I1002 21:19:33.052822  537164 certs.go:195] generating shared ca certs ...
	I1002 21:19:33.052853  537164 certs.go:227] acquiring lock for ca certs: {Name:mk99bb18e623cf4cf4a4efda3dab88668aa481a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:33.053073  537164 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key
	I1002 21:19:33.053136  537164 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key
	I1002 21:19:33.053148  537164 certs.go:257] generating profile certs ...
	I1002 21:19:33.053289  537164 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/client.key
	I1002 21:19:33.053374  537164 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/apiserver.key.33b8e485
	I1002 21:19:33.053438  537164 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/proxy-client.key
	I1002 21:19:33.053555  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem (1338 bytes)
	W1002 21:19:33.053582  537164 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569_empty.pem, impossibly tiny 0 bytes
	I1002 21:19:33.053590  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:19:33.053666  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:19:33.053718  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:19:33.053754  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem (1675 bytes)
	I1002 21:19:33.053813  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:19:33.054904  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:19:33.086355  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:19:33.119310  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:19:33.156405  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:19:33.190404  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:19:33.226635  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:19:33.267389  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:19:33.303795  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:19:33.348342  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /usr/share/ca-certificates/4975692.pem (1708 bytes)
	I1002 21:19:33.403584  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:19:33.452816  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem --> /usr/share/ca-certificates/497569.pem (1338 bytes)
	I1002 21:19:33.492692  537164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:19:33.524393  537164 ssh_runner.go:195] Run: openssl version
	I1002 21:19:33.533803  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4975692.pem && ln -fs /usr/share/ca-certificates/4975692.pem /etc/ssl/certs/4975692.pem"
	I1002 21:19:33.554059  537164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4975692.pem
	I1002 21:19:33.562341  537164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:27 /usr/share/ca-certificates/4975692.pem
	I1002 21:19:33.562459  537164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4975692.pem
	I1002 21:19:33.572092  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4975692.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:19:33.588683  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:19:33.605965  537164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:33.613256  537164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:33.613339  537164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:33.623263  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:19:33.638525  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497569.pem && ln -fs /usr/share/ca-certificates/497569.pem /etc/ssl/certs/497569.pem"
	I1002 21:19:33.655792  537164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497569.pem
	I1002 21:19:33.662768  537164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:27 /usr/share/ca-certificates/497569.pem
	I1002 21:19:33.662833  537164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497569.pem
	I1002 21:19:33.671727  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497569.pem /etc/ssl/certs/51391683.0"
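Each CA lands in /usr/share/ca-certificates and is then exposed under /etc/ssl/certs via a symlink named for its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above). A sketch reproducing that pairing by shelling out to openssl, as the logged commands do:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash recreates the `openssl x509 -hash` + `ln -fs` pair from
	// the log: certs in /etc/ssl/certs are looked up by subject-hash filenames.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // ignore error; mirrors the force in `ln -fs`
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}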
	I1002 21:19:33.683405  537164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:19:33.689075  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:19:33.699191  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:19:33.708429  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:19:33.715982  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:19:33.725718  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:19:33.732883  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
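`openssl x509 -checkend 86400` exits non-zero when a certificate expires within the next 24 hours, which is how each control-plane cert above is screened before being reused. The same check in pure Go with crypto/x509:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires inside d,
	// the check `openssl x509 -checkend 86400` performs above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, err)
	}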
	I1002 21:19:33.742863  537164 kubeadm.go:400] StartCluster: {Name:pause-128856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:19:33.743025  537164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:19:33.743110  537164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:19:33.794920  537164 cri.go:89] found id: "0571b181224eed714f4f038aec0278668547fcc7a7a6bf02fd4fdbed33a4efde"
	I1002 21:19:33.794946  537164 cri.go:89] found id: "9d31f80733a3cc5ddd857f757c41cd2aa67b47084e1f991bfa8ea3b998fc0799"
	I1002 21:19:33.794953  537164 cri.go:89] found id: "11c83cdfc01723ef7e45b3510f1e200c5a4ab1167826f9a1c2fbc3b463993059"
	I1002 21:19:33.794957  537164 cri.go:89] found id: "4f4612b1df9269b91b676f8fbba243c1bbedff79a13ef12670c064228daf6327"
	I1002 21:19:33.794961  537164 cri.go:89] found id: "05387411e6ed3c96e79e0122ad74634891c4e42e18758ffb12ade2efa81ea15d"
	I1002 21:19:33.794966  537164 cri.go:89] found id: "11567d5c6ef86dfb46f79bbc6ffabddf97b216eb14a9f66cf90db5331ce637ed"
	I1002 21:19:33.794984  537164 cri.go:89] found id: ""
	I1002 21:19:33.795034  537164 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
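Note: the container IDs in the stderr dump above come from crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system: -a includes stopped containers, --quiet prints bare IDs (one per line), and the label filter narrows the listing to kube-system pods; the follow-up "sudo runc list -f json" then asks the OCI runtime for the container states minikube needs before pausing. A short Go sketch of the same ID collection (a sketch, assuming crictl is on PATH; the helper name is made up):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs mirrors the crictl invocation in the log:
    // bare container IDs, whitespace-separated, possibly empty.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        fmt.Println(ids, err)
    }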
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-128856 -n pause-128856
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-128856 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-128856 logs -n 25: (1.598822009s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ cert-options-664739 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                                             │ cert-options-664739       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ ssh     │ -p cert-options-664739 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                                           │ cert-options-664739       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ delete  │ -p cert-options-664739                                                                                                                                                                                                                                                  │ cert-options-664739       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ ssh     │ -p NoKubernetes-685644 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ stop    │ -p NoKubernetes-685644                                                                                                                                                                                                                                                  │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ start   │ -p NoKubernetes-685644 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                              │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:17 UTC │
	│ start   │ -p stopped-upgrade-391687 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ stopped-upgrade-391687    │ jenkins │ v1.32.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:17 UTC │
	│ ssh     │ -p NoKubernetes-685644 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │                     │
	│ delete  │ -p NoKubernetes-685644                                                                                                                                                                                                                                                  │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:17 UTC │
	│ stop    │ -p kubernetes-upgrade-238376                                                                                                                                                                                                                                            │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:17 UTC │
	│ start   │ -p pause-128856 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ pause-128856              │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                      │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:18 UTC │
	│ stop    │ stopped-upgrade-391687 stop                                                                                                                                                                                                                                             │ stopped-upgrade-391687    │ jenkins │ v1.32.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:17 UTC │
	│ start   │ -p stopped-upgrade-391687 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ stopped-upgrade-391687    │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                             │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │                     │
	│ start   │ -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                      │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:18 UTC │
	│ delete  │ -p kubernetes-upgrade-238376                                                                                                                                                                                                                                            │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p old-k8s-version-166937 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-166937    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:19 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-391687 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ stopped-upgrade-391687    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │                     │
	│ delete  │ -p stopped-upgrade-391687                                                                                                                                                                                                                                               │ stopped-upgrade-391687    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-397715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                       │ no-preload-397715         │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │                     │
	│ start   │ -p pause-128856 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                              │ pause-128856              │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:19 UTC │
	│ start   │ -p cert-expiration-852898 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ cert-expiration-852898    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-166937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                            │ old-k8s-version-166937    │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │ 02 Oct 25 21:19 UTC │
	│ stop    │ -p old-k8s-version-166937 --alsologtostderr -v=3                                                                                                                                                                                                                        │ old-k8s-version-166937    │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:18:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:18:58.468031  537395 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:18:58.468335  537395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:18:58.468340  537395 out.go:374] Setting ErrFile to fd 2...
	I1002 21:18:58.468345  537395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:18:58.468628  537395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 21:18:58.469254  537395 out.go:368] Setting JSON to false
	I1002 21:18:58.470559  537395 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7273,"bootTime":1759432665,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:18:58.470674  537395 start.go:140] virtualization: kvm guest
	I1002 21:18:58.472751  537395 out.go:179] * [cert-expiration-852898] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:18:58.474068  537395 notify.go:220] Checking for updates...
	I1002 21:18:58.474371  537395 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:18:58.475347  537395 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:18:58.476368  537395 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:18:58.477356  537395 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 21:18:58.478461  537395 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:18:58.479699  537395 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:18:58.481482  537395 config.go:182] Loaded profile config "cert-expiration-852898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:18:58.482206  537395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:18:58.482276  537395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:18:58.502925  537395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I1002 21:18:58.503576  537395 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:18:58.504193  537395 main.go:141] libmachine: Using API Version  1
	I1002 21:18:58.504207  537395 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:18:58.504648  537395 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:18:58.504872  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:18:58.505155  537395 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:18:58.505610  537395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:18:58.505655  537395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:18:58.521956  537395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I1002 21:18:58.522895  537395 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:18:58.523523  537395 main.go:141] libmachine: Using API Version  1
	I1002 21:18:58.523547  537395 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:18:58.523975  537395 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:18:58.524215  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
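Note: the "Launching plugin server ... Plugin server listening at address 127.0.0.1:PORT ... Calling .GetVersion" lines are libmachine's driver-plugin handshake: the kvm2 driver runs as a separate binary (docker-machine-driver-kvm2) that serves driver methods over an RPC listener on localhost, and the minikube process dials the advertised port and proxies calls such as GetVersion, SetConfigRaw, and GetMachineName to it. A stripped-down sketch of that pattern using Go's net/rpc (names and wire format are illustrative, not libmachine's actual protocol):

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    // Driver stands in for a machine driver plugin.
    type Driver struct{}

    // GetVersion mirrors the ".GetVersion" calls in the log.
    func (d *Driver) GetVersion(args int, reply *int) error {
        *reply = 1 // cf. "Using API Version  1" above
        return nil
    }

    func main() {
        if err := rpc.Register(&Driver{}); err != nil {
            panic(err)
        }
        l, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, as in the log
        if err != nil {
            panic(err)
        }
        fmt.Println("plugin server listening at", l.Addr())
        go rpc.Accept(l)

        client, err := rpc.Dial("tcp", l.Addr().String())
        if err != nil {
            panic(err)
        }
        var v int
        if err := client.Call("Driver.GetVersion", 0, &v); err != nil {
            panic(err)
        }
        fmt.Println("driver API version:", v)
    }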
	I1002 21:18:58.567293  537395 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 21:18:58.568368  537395 start.go:304] selected driver: kvm2
	I1002 21:18:58.568383  537395 start.go:924] validating driver "kvm2" against &{Name:cert-expiration-852898 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-852898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:18:58.568493  537395 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:18:58.569173  537395 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:18:58.569260  537395 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:18:58.587991  537395 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:18:58.588018  537395 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:18:58.607082  537395 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:18:58.607629  537395 cni.go:84] Creating CNI manager for ""
	I1002 21:18:58.607694  537395 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:18:58.607836  537395 start.go:348] cluster config:
	{Name:cert-expiration-852898 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-852898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:18:58.607994  537395 iso.go:125] acquiring lock: {Name:mk7586bb79dc7f44da54ee16895643204aac50ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:18:58.610468  537395 out.go:179] * Starting "cert-expiration-852898" primary control-plane node in "cert-expiration-852898" cluster
	I1002 21:18:56.076187  537126 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 21:18:56.076390  537126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:18:56.076445  537126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:18:56.092499  537126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I1002 21:18:56.093119  537126 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:18:56.093841  537126 main.go:141] libmachine: Using API Version  1
	I1002 21:18:56.093868  537126 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:18:56.094301  537126 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:18:56.094523  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetMachineName
	I1002 21:18:56.094665  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:18:56.094822  537126 start.go:159] libmachine.API.Create for "no-preload-397715" (driver="kvm2")
	I1002 21:18:56.094848  537126 client.go:168] LocalClient.Create starting
	I1002 21:18:56.094881  537126 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem
	I1002 21:18:56.094927  537126 main.go:141] libmachine: Decoding PEM data...
	I1002 21:18:56.094946  537126 main.go:141] libmachine: Parsing certificate...
	I1002 21:18:56.095017  537126 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem
	I1002 21:18:56.095043  537126 main.go:141] libmachine: Decoding PEM data...
	I1002 21:18:56.095060  537126 main.go:141] libmachine: Parsing certificate...
	I1002 21:18:56.095090  537126 main.go:141] libmachine: Running pre-create checks...
	I1002 21:18:56.095103  537126 main.go:141] libmachine: (no-preload-397715) Calling .PreCreateCheck
	I1002 21:18:56.095502  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetConfigRaw
	I1002 21:18:56.096286  537126 main.go:141] libmachine: Creating machine...
	I1002 21:18:56.096311  537126 main.go:141] libmachine: (no-preload-397715) Calling .Create
	I1002 21:18:56.097849  537126 main.go:141] libmachine: (no-preload-397715) creating domain...
	I1002 21:18:56.097876  537126 main.go:141] libmachine: (no-preload-397715) creating network...
	I1002 21:18:56.099123  537126 main.go:141] libmachine: (no-preload-397715) DBG | found existing default network
	I1002 21:18:56.099312  537126 main.go:141] libmachine: (no-preload-397715) DBG | <network connections='3'>
	I1002 21:18:56.099346  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <name>default</name>
	I1002 21:18:56.099359  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 21:18:56.099376  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <forward mode='nat'>
	I1002 21:18:56.099385  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <nat>
	I1002 21:18:56.099401  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <port start='1024' end='65535'/>
	I1002 21:18:56.099436  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </nat>
	I1002 21:18:56.099469  537126 main.go:141] libmachine: (no-preload-397715) DBG |   </forward>
	I1002 21:18:56.099486  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 21:18:56.099508  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 21:18:56.099524  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 21:18:56.099533  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <dhcp>
	I1002 21:18:56.099544  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 21:18:56.099558  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </dhcp>
	I1002 21:18:56.099571  537126 main.go:141] libmachine: (no-preload-397715) DBG |   </ip>
	I1002 21:18:56.099578  537126 main.go:141] libmachine: (no-preload-397715) DBG | </network>
	I1002 21:18:56.099592  537126 main.go:141] libmachine: (no-preload-397715) DBG | 
	I1002 21:18:56.100292  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:56.100155  537305 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:74:ca:a3} reservation:<nil>}
	I1002 21:18:56.100954  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:56.100841  537305 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:74:7b:40} reservation:<nil>}
	I1002 21:18:56.101679  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:56.101619  537305 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000318150}
	I1002 21:18:56.101739  537126 main.go:141] libmachine: (no-preload-397715) DBG | defining private network:
	I1002 21:18:56.101759  537126 main.go:141] libmachine: (no-preload-397715) DBG | 
	I1002 21:18:56.101769  537126 main.go:141] libmachine: (no-preload-397715) DBG | <network>
	I1002 21:18:56.101785  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <name>mk-no-preload-397715</name>
	I1002 21:18:56.101795  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <dns enable='no'/>
	I1002 21:18:56.101807  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1002 21:18:56.101818  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <dhcp>
	I1002 21:18:56.101827  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1002 21:18:56.101838  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </dhcp>
	I1002 21:18:56.101844  537126 main.go:141] libmachine: (no-preload-397715) DBG |   </ip>
	I1002 21:18:56.101855  537126 main.go:141] libmachine: (no-preload-397715) DBG | </network>
	I1002 21:18:56.101861  537126 main.go:141] libmachine: (no-preload-397715) DBG | 
	I1002 21:18:56.107928  537126 main.go:141] libmachine: (no-preload-397715) DBG | creating private network mk-no-preload-397715 192.168.61.0/24...
	I1002 21:18:56.184323  537126 main.go:141] libmachine: (no-preload-397715) DBG | private network mk-no-preload-397715 192.168.61.0/24 created
	I1002 21:18:56.184649  537126 main.go:141] libmachine: (no-preload-397715) DBG | <network>
	I1002 21:18:56.184674  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <name>mk-no-preload-397715</name>
	I1002 21:18:56.184690  537126 main.go:141] libmachine: (no-preload-397715) setting up store path in /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715 ...
	I1002 21:18:56.184733  537126 main.go:141] libmachine: (no-preload-397715) building disk image from file:///home/jenkins/minikube-integration/21682-492630/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 21:18:56.184750  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <uuid>12eb0ad0-b1f3-449c-8d25-4e03662856fb</uuid>
	I1002 21:18:56.184767  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 21:18:56.184781  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <mac address='52:54:00:bc:ae:4f'/>
	I1002 21:18:56.184829  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <dns enable='no'/>
	I1002 21:18:56.184860  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1002 21:18:56.184877  537126 main.go:141] libmachine: (no-preload-397715) Downloading /home/jenkins/minikube-integration/21682-492630/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21682-492630/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 21:18:56.184893  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <dhcp>
	I1002 21:18:56.184908  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1002 21:18:56.184914  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </dhcp>
	I1002 21:18:56.184922  537126 main.go:141] libmachine: (no-preload-397715) DBG |   </ip>
	I1002 21:18:56.184932  537126 main.go:141] libmachine: (no-preload-397715) DBG | </network>
	I1002 21:18:56.184949  537126 main.go:141] libmachine: (no-preload-397715) DBG | 
	I1002 21:18:56.184965  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:56.184626  537305 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 21:18:56.477646  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:56.477487  537305 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/id_rsa...
	I1002 21:18:56.807483  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:56.807337  537305 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/no-preload-397715.rawdisk...
	I1002 21:18:56.807514  537126 main.go:141] libmachine: (no-preload-397715) DBG | Writing magic tar header
	I1002 21:18:56.807574  537126 main.go:141] libmachine: (no-preload-397715) DBG | Writing SSH key tar header
	I1002 21:18:56.807624  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:56.807473  537305 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715 ...
	I1002 21:18:56.807652  537126 main.go:141] libmachine: (no-preload-397715) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715
	I1002 21:18:56.807661  537126 main.go:141] libmachine: (no-preload-397715) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630/.minikube/machines
	I1002 21:18:56.807676  537126 main.go:141] libmachine: (no-preload-397715) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 21:18:56.807690  537126 main.go:141] libmachine: (no-preload-397715) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21682-492630
	I1002 21:18:56.807729  537126 main.go:141] libmachine: (no-preload-397715) setting executable bit set on /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715 (perms=drwx------)
	I1002 21:18:56.807753  537126 main.go:141] libmachine: (no-preload-397715) setting executable bit set on /home/jenkins/minikube-integration/21682-492630/.minikube/machines (perms=drwxr-xr-x)
	I1002 21:18:56.807761  537126 main.go:141] libmachine: (no-preload-397715) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 21:18:56.807770  537126 main.go:141] libmachine: (no-preload-397715) DBG | checking permissions on dir: /home/jenkins
	I1002 21:18:56.807778  537126 main.go:141] libmachine: (no-preload-397715) DBG | checking permissions on dir: /home
	I1002 21:18:56.807786  537126 main.go:141] libmachine: (no-preload-397715) DBG | skipping /home - not owner
	I1002 21:18:56.807825  537126 main.go:141] libmachine: (no-preload-397715) setting executable bit set on /home/jenkins/minikube-integration/21682-492630/.minikube (perms=drwxr-xr-x)
	I1002 21:18:56.807848  537126 main.go:141] libmachine: (no-preload-397715) setting executable bit set on /home/jenkins/minikube-integration/21682-492630 (perms=drwxrwxr-x)
	I1002 21:18:56.807866  537126 main.go:141] libmachine: (no-preload-397715) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 21:18:56.807879  537126 main.go:141] libmachine: (no-preload-397715) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 21:18:56.807893  537126 main.go:141] libmachine: (no-preload-397715) defining domain...
	I1002 21:18:56.809138  537126 main.go:141] libmachine: (no-preload-397715) defining domain using XML: 
	I1002 21:18:56.809162  537126 main.go:141] libmachine: (no-preload-397715) <domain type='kvm'>
	I1002 21:18:56.809172  537126 main.go:141] libmachine: (no-preload-397715)   <name>no-preload-397715</name>
	I1002 21:18:56.809179  537126 main.go:141] libmachine: (no-preload-397715)   <memory unit='MiB'>3072</memory>
	I1002 21:18:56.809208  537126 main.go:141] libmachine: (no-preload-397715)   <vcpu>2</vcpu>
	I1002 21:18:56.809226  537126 main.go:141] libmachine: (no-preload-397715)   <features>
	I1002 21:18:56.809234  537126 main.go:141] libmachine: (no-preload-397715)     <acpi/>
	I1002 21:18:56.809248  537126 main.go:141] libmachine: (no-preload-397715)     <apic/>
	I1002 21:18:56.809277  537126 main.go:141] libmachine: (no-preload-397715)     <pae/>
	I1002 21:18:56.809297  537126 main.go:141] libmachine: (no-preload-397715)   </features>
	I1002 21:18:56.809308  537126 main.go:141] libmachine: (no-preload-397715)   <cpu mode='host-passthrough'>
	I1002 21:18:56.809318  537126 main.go:141] libmachine: (no-preload-397715)   </cpu>
	I1002 21:18:56.809328  537126 main.go:141] libmachine: (no-preload-397715)   <os>
	I1002 21:18:56.809338  537126 main.go:141] libmachine: (no-preload-397715)     <type>hvm</type>
	I1002 21:18:56.809350  537126 main.go:141] libmachine: (no-preload-397715)     <boot dev='cdrom'/>
	I1002 21:18:56.809360  537126 main.go:141] libmachine: (no-preload-397715)     <boot dev='hd'/>
	I1002 21:18:56.809370  537126 main.go:141] libmachine: (no-preload-397715)     <bootmenu enable='no'/>
	I1002 21:18:56.809384  537126 main.go:141] libmachine: (no-preload-397715)   </os>
	I1002 21:18:56.809395  537126 main.go:141] libmachine: (no-preload-397715)   <devices>
	I1002 21:18:56.809404  537126 main.go:141] libmachine: (no-preload-397715)     <disk type='file' device='cdrom'>
	I1002 21:18:56.809430  537126 main.go:141] libmachine: (no-preload-397715)       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/boot2docker.iso'/>
	I1002 21:18:56.809441  537126 main.go:141] libmachine: (no-preload-397715)       <target dev='hdc' bus='scsi'/>
	I1002 21:18:56.809453  537126 main.go:141] libmachine: (no-preload-397715)       <readonly/>
	I1002 21:18:56.809461  537126 main.go:141] libmachine: (no-preload-397715)     </disk>
	I1002 21:18:56.809476  537126 main.go:141] libmachine: (no-preload-397715)     <disk type='file' device='disk'>
	I1002 21:18:56.809488  537126 main.go:141] libmachine: (no-preload-397715)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 21:18:56.809503  537126 main.go:141] libmachine: (no-preload-397715)       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/no-preload-397715.rawdisk'/>
	I1002 21:18:56.809511  537126 main.go:141] libmachine: (no-preload-397715)       <target dev='hda' bus='virtio'/>
	I1002 21:18:56.809522  537126 main.go:141] libmachine: (no-preload-397715)     </disk>
	I1002 21:18:56.809530  537126 main.go:141] libmachine: (no-preload-397715)     <interface type='network'>
	I1002 21:18:56.809542  537126 main.go:141] libmachine: (no-preload-397715)       <source network='mk-no-preload-397715'/>
	I1002 21:18:56.809551  537126 main.go:141] libmachine: (no-preload-397715)       <model type='virtio'/>
	I1002 21:18:56.809557  537126 main.go:141] libmachine: (no-preload-397715)     </interface>
	I1002 21:18:56.809563  537126 main.go:141] libmachine: (no-preload-397715)     <interface type='network'>
	I1002 21:18:56.809576  537126 main.go:141] libmachine: (no-preload-397715)       <source network='default'/>
	I1002 21:18:56.809583  537126 main.go:141] libmachine: (no-preload-397715)       <model type='virtio'/>
	I1002 21:18:56.809614  537126 main.go:141] libmachine: (no-preload-397715)     </interface>
	I1002 21:18:56.809640  537126 main.go:141] libmachine: (no-preload-397715)     <serial type='pty'>
	I1002 21:18:56.809656  537126 main.go:141] libmachine: (no-preload-397715)       <target port='0'/>
	I1002 21:18:56.809668  537126 main.go:141] libmachine: (no-preload-397715)     </serial>
	I1002 21:18:56.809689  537126 main.go:141] libmachine: (no-preload-397715)     <console type='pty'>
	I1002 21:18:56.809723  537126 main.go:141] libmachine: (no-preload-397715)       <target type='serial' port='0'/>
	I1002 21:18:56.809742  537126 main.go:141] libmachine: (no-preload-397715)     </console>
	I1002 21:18:56.809761  537126 main.go:141] libmachine: (no-preload-397715)     <rng model='virtio'>
	I1002 21:18:56.809776  537126 main.go:141] libmachine: (no-preload-397715)       <backend model='random'>/dev/random</backend>
	I1002 21:18:56.809785  537126 main.go:141] libmachine: (no-preload-397715)     </rng>
	I1002 21:18:56.809794  537126 main.go:141] libmachine: (no-preload-397715)   </devices>
	I1002 21:18:56.809803  537126 main.go:141] libmachine: (no-preload-397715) </domain>
	I1002 21:18:56.809812  537126 main.go:141] libmachine: (no-preload-397715) 
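Note: the XML assembled above (virtio disk and NICs, SCSI cdrom for the boot2docker ISO, cdrom-then-hd boot order, host-passthrough CPU, serial console, virtio RNG) is defined with libvirt rather than run directly; libvirt then fills in generated fields such as the UUID, MAC addresses, and PCI slot assignments that appear in the "starting domain XML" dump below. The define-then-start lifecycle, sketched with the libvirt Go bindings (assumes libvirt.org/go/libvirt; the XML here is a deliberately minimal stand-in for the full document above):

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    const domainXML = `<domain type='kvm'>
      <name>mk-demo</name>
      <memory unit='MiB'>3072</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type></os>
    </domain>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // DomainDefineXML registers the domain without booting it;
        // Create() performs the actual start.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        if err := dom.Create(); err != nil {
            panic(err)
        }
        fmt.Println("domain defined and started")
    }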
	I1002 21:18:56.813676  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:1a:3f:6f in network default
	I1002 21:18:56.814276  537126 main.go:141] libmachine: (no-preload-397715) starting domain...
	I1002 21:18:56.814293  537126 main.go:141] libmachine: (no-preload-397715) ensuring networks are active...
	I1002 21:18:56.814301  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:18:56.815072  537126 main.go:141] libmachine: (no-preload-397715) Ensuring network default is active
	I1002 21:18:56.815458  537126 main.go:141] libmachine: (no-preload-397715) Ensuring network mk-no-preload-397715 is active
	I1002 21:18:56.816181  537126 main.go:141] libmachine: (no-preload-397715) getting domain XML...
	I1002 21:18:56.817483  537126 main.go:141] libmachine: (no-preload-397715) DBG | starting domain XML:
	I1002 21:18:56.817504  537126 main.go:141] libmachine: (no-preload-397715) DBG | <domain type='kvm'>
	I1002 21:18:56.817512  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <name>no-preload-397715</name>
	I1002 21:18:56.817522  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <uuid>abb10914-4c2c-400e-b9a8-4eee8d9c2023</uuid>
	I1002 21:18:56.817528  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <memory unit='KiB'>3145728</memory>
	I1002 21:18:56.817532  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1002 21:18:56.817538  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 21:18:56.817542  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <os>
	I1002 21:18:56.817549  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 21:18:56.817553  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <boot dev='cdrom'/>
	I1002 21:18:56.817558  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <boot dev='hd'/>
	I1002 21:18:56.817565  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <bootmenu enable='no'/>
	I1002 21:18:56.817570  537126 main.go:141] libmachine: (no-preload-397715) DBG |   </os>
	I1002 21:18:56.817574  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <features>
	I1002 21:18:56.817579  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <acpi/>
	I1002 21:18:56.817583  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <apic/>
	I1002 21:18:56.817588  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <pae/>
	I1002 21:18:56.817592  537126 main.go:141] libmachine: (no-preload-397715) DBG |   </features>
	I1002 21:18:56.817601  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 21:18:56.817616  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <clock offset='utc'/>
	I1002 21:18:56.817629  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 21:18:56.817639  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <on_reboot>restart</on_reboot>
	I1002 21:18:56.817649  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <on_crash>destroy</on_crash>
	I1002 21:18:56.817653  537126 main.go:141] libmachine: (no-preload-397715) DBG |   <devices>
	I1002 21:18:56.817661  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 21:18:56.817666  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <disk type='file' device='cdrom'>
	I1002 21:18:56.817674  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <driver name='qemu' type='raw'/>
	I1002 21:18:56.817681  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/boot2docker.iso'/>
	I1002 21:18:56.817693  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 21:18:56.817718  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <readonly/>
	I1002 21:18:56.817733  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 21:18:56.817740  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </disk>
	I1002 21:18:56.817753  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <disk type='file' device='disk'>
	I1002 21:18:56.817760  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 21:18:56.817769  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <source file='/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/no-preload-397715.rawdisk'/>
	I1002 21:18:56.817777  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <target dev='hda' bus='virtio'/>
	I1002 21:18:56.817794  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 21:18:56.817808  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </disk>
	I1002 21:18:56.817821  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 21:18:56.817831  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 21:18:56.817840  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </controller>
	I1002 21:18:56.817849  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 21:18:56.817856  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 21:18:56.817868  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 21:18:56.817897  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </controller>
	I1002 21:18:56.817920  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <interface type='network'>
	I1002 21:18:56.817931  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <mac address='52:54:00:b7:18:7b'/>
	I1002 21:18:56.817941  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <source network='mk-no-preload-397715'/>
	I1002 21:18:56.817950  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <model type='virtio'/>
	I1002 21:18:56.817964  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 21:18:56.817971  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </interface>
	I1002 21:18:56.817980  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <interface type='network'>
	I1002 21:18:56.817989  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <mac address='52:54:00:1a:3f:6f'/>
	I1002 21:18:56.818002  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <source network='default'/>
	I1002 21:18:56.818011  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <model type='virtio'/>
	I1002 21:18:56.818022  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 21:18:56.818030  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </interface>
	I1002 21:18:56.818040  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <serial type='pty'>
	I1002 21:18:56.818049  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <target type='isa-serial' port='0'>
	I1002 21:18:56.818060  537126 main.go:141] libmachine: (no-preload-397715) DBG |         <model name='isa-serial'/>
	I1002 21:18:56.818081  537126 main.go:141] libmachine: (no-preload-397715) DBG |       </target>
	I1002 21:18:56.818091  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </serial>
	I1002 21:18:56.818097  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <console type='pty'>
	I1002 21:18:56.818104  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <target type='serial' port='0'/>
	I1002 21:18:56.818113  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </console>
	I1002 21:18:56.818121  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <input type='mouse' bus='ps2'/>
	I1002 21:18:56.818134  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 21:18:56.818142  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <audio id='1' type='none'/>
	I1002 21:18:56.818158  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <memballoon model='virtio'>
	I1002 21:18:56.818175  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 21:18:56.818184  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </memballoon>
	I1002 21:18:56.818189  537126 main.go:141] libmachine: (no-preload-397715) DBG |     <rng model='virtio'>
	I1002 21:18:56.818202  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <backend model='random'>/dev/random</backend>
	I1002 21:18:56.818216  537126 main.go:141] libmachine: (no-preload-397715) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 21:18:56.818227  537126 main.go:141] libmachine: (no-preload-397715) DBG |     </rng>
	I1002 21:18:56.818233  537126 main.go:141] libmachine: (no-preload-397715) DBG |   </devices>
	I1002 21:18:56.818269  537126 main.go:141] libmachine: (no-preload-397715) DBG | </domain>
	I1002 21:18:56.818294  537126 main.go:141] libmachine: (no-preload-397715) DBG | 
	I1002 21:18:58.254851  537126 main.go:141] libmachine: (no-preload-397715) waiting for domain to start...
	I1002 21:18:58.256954  537126 main.go:141] libmachine: (no-preload-397715) domain is now running
	I1002 21:18:58.256985  537126 main.go:141] libmachine: (no-preload-397715) waiting for IP...
	I1002 21:18:58.257587  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:18:58.258237  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:18:58.258264  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:18:58.258731  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:18:58.258758  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:58.258613  537305 retry.go:31] will retry after 225.903524ms: waiting for domain to come up
	I1002 21:18:58.486915  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:18:58.487822  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:18:58.487848  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:18:58.488320  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:18:58.488353  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:58.488229  537305 retry.go:31] will retry after 251.18922ms: waiting for domain to come up
	I1002 21:18:58.741447  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:18:58.743193  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:18:58.743224  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:18:58.743922  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:18:58.743951  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:58.743815  537305 retry.go:31] will retry after 295.809589ms: waiting for domain to come up
	I1002 21:18:59.041737  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:18:59.042454  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:18:59.042479  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:18:59.042893  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:18:59.042928  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:59.042844  537305 retry.go:31] will retry after 393.988113ms: waiting for domain to come up
	I1002 21:18:59.438723  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:18:59.439434  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:18:59.439460  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:18:59.439880  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:18:59.439939  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:18:59.439874  537305 retry.go:31] will retry after 713.404344ms: waiting for domain to come up
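Note: the retry lines above are the driver polling for the guest's address: it first consults the network's DHCP lease table (source=lease), falls back to ARP (source=arp), and sleeps for a randomized, growing interval between attempts until an interface shows up. A condensed sketch of that loop (assumes libvirt.org/go/libvirt; the backoff shape and hard-coded domain name are illustrative only):

    package main

    import (
        "fmt"
        "time"

        libvirt "libvirt.org/go/libvirt"
    )

    // waitForIP polls the domain's interface addresses, lease table first
    // and then ARP, with growing sleeps - the same shape as the log above.
    func waitForIP(dom *libvirt.Domain, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            for _, src := range []libvirt.DomainInterfaceAddressesSource{
                libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE,
                libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,
            } {
                ifaces, err := dom.ListAllInterfaceAddresses(src)
                if err != nil {
                    continue // source not available yet; try the next one
                }
                for _, iface := range ifaces {
                    for _, addr := range iface.Addrs {
                        if addr.Addr != "" {
                            return addr.Addr, nil
                        }
                    }
                }
            }
            time.Sleep(backoff)
            backoff += backoff / 2 // grow roughly like the retry delays above
        }
        return "", fmt.Errorf("timed out waiting for domain IP")
    }

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        dom, err := conn.LookupDomainByName("no-preload-397715")
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForIP(dom, 2*time.Minute))
    }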
	I1002 21:18:57.798261  536818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:18:57.813521  536818 kubeadm.go:883] updating cluster {Name:old-k8s-version-166937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-166937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:18:57.813677  536818 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 21:18:57.813755  536818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:18:57.851587  536818 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1002 21:18:57.851667  536818 ssh_runner.go:195] Run: which lz4
	I1002 21:18:57.855851  536818 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 21:18:57.860736  536818 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 21:18:57.860767  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I1002 21:18:59.573016  536818 crio.go:462] duration metric: took 1.717202942s to copy over tarball
	I1002 21:18:59.573146  536818 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 21:19:01.526305  536818 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.953120641s)
	I1002 21:19:01.526345  536818 crio.go:469] duration metric: took 1.953274558s to extract the tarball
	I1002 21:19:01.526357  536818 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 21:19:01.582147  536818 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:19:01.628965  536818 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:19:01.628995  536818 cache_images.go:85] Images are preloaded, skipping loading
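
The sequence above is the preload path: stat the tarball on the guest, scp it over when absent, extract it with tar and lz4 into /var, then delete the archive and re-check crictl images. A hedged sketch of the same flow via os/exec (run is a hypothetical helper; the paths and tar flags mirror the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
		}
		return nil
	}

	func main() {
		const tarball = "/preloaded.tar.lz4" // same guest path as in the log
		// stat exits with status 1 when the file is absent, as in the log;
		// the real runner would scp the cached tarball over at this point.
		if err := run("stat", "-c", "%s %y", tarball); err != nil {
			fmt.Println("tarball missing; copy it over before extracting")
			return
		}
		// Same extraction command the log records.
		if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
			fmt.Println("extract failed:", err)
			return
		}
		_ = run("sudo", "rm", "-f", tarball)
	}
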
	I1002 21:19:01.629006  536818 kubeadm.go:934] updating node { 192.168.72.161 8443 v1.28.0 crio true true} ...
	I1002 21:19:01.629218  536818 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-166937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-166937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:19:01.629353  536818 ssh_runner.go:195] Run: crio config
	I1002 21:19:01.680258  536818 cni.go:84] Creating CNI manager for ""
	I1002 21:19:01.680283  536818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:19:01.680303  536818 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:19:01.680325  536818 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.161 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-166937 NodeName:old-k8s-version-166937 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:19:01.680454  536818 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-166937"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
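
For context, a kubeadm.yaml like the one above is rendered from the cluster config. A minimal sketch of generating such a file with text/template (the template body and field names here are illustrative, not minikube's own; the values are taken from this log):

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.IP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.Name}}"
	  kubeletExtraArgs:
	    node-ip: {{.IP}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, struct {
			IP   string
			Port int
			Name string
		}{"192.168.72.161", 8443, "old-k8s-version-166937"})
	}
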
	
	I1002 21:19:01.680516  536818 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 21:19:01.693465  536818 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:19:01.693539  536818 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:19:01.708486  536818 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1002 21:19:01.730142  536818 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:19:01.750409  536818 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1002 21:19:01.772732  536818 ssh_runner.go:195] Run: grep 192.168.72.161	control-plane.minikube.internal$ /etc/hosts
	I1002 21:19:01.778046  536818 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.161	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
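
The one-liner above makes the /etc/hosts update idempotent: strip any old entry for the name, append the fresh mapping, and copy the result back into place. An equivalent sketch in Go (updateHosts is a hypothetical helper; the real step runs over SSH with sudo):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHosts drops any line ending in "\t<name>" and appends a fresh
	// "ip\tname" mapping, writing via a temp file then renaming (standing in
	// for the `> /tmp/h.$$; sudo cp` steps in the log).
	func updateHosts(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // grep -v $'\t<name>$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := updateHosts("/etc/hosts", "192.168.72.161", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
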
	I1002 21:19:01.793354  536818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:01.960365  536818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:19:01.981460  536818 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937 for IP: 192.168.72.161
	I1002 21:19:01.981485  536818 certs.go:195] generating shared ca certs ...
	I1002 21:19:01.981507  536818 certs.go:227] acquiring lock for ca certs: {Name:mk99bb18e623cf4cf4a4efda3dab88668aa481a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:01.981754  536818 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key
	I1002 21:19:01.981825  536818 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key
	I1002 21:19:01.981839  536818 certs.go:257] generating profile certs ...
	I1002 21:19:01.981921  536818 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.key
	I1002 21:19:01.981950  536818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt with IP's: []
	I1002 21:19:02.435467  536818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt ...
	I1002 21:19:02.435499  536818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: {Name:mk5e462fb9dd310f26fddd3d7f1a07849180c185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:02.435732  536818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.key ...
	I1002 21:19:02.435756  536818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.key: {Name:mkbaf4576ceedb239d876abb767b0fdca4b57c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:02.435883  536818 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.key.2b01db07
	I1002 21:19:02.435902  536818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.crt.2b01db07 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.161]
	I1002 21:19:02.625030  536818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.crt.2b01db07 ...
	I1002 21:19:02.625062  536818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.crt.2b01db07: {Name:mk9b2f4cac576cc1878911d447bcc159bb6c329f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:02.625226  536818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.key.2b01db07 ...
	I1002 21:19:02.625241  536818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.key.2b01db07: {Name:mkbdb83e30d17dba993c43caa9c89ea148a14146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:02.625316  536818 certs.go:382] copying /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.crt.2b01db07 -> /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.crt
	I1002 21:19:02.625414  536818 certs.go:386] copying /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.key.2b01db07 -> /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.key
	I1002 21:19:02.625479  536818 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.key
	I1002 21:19:02.625496  536818 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.crt with IP's: []
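
Each "generating signed profile cert" step above boils down to building an x509 template with the requested SANs and signing it with the shared CA. A compact sketch with crypto/x509 (self-contained, so it creates a throwaway CA instead of loading ca.crt/ca.key from the profile directory; the SAN IPs match the apiserver cert in the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Stand-in CA; the real code reuses the existing minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the apiserver SANs recorded above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.161"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("signed cert, %d DER bytes\n", len(der))
	}
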
	I1002 21:18:58.611616  537395 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:18:58.611679  537395 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:18:58.611688  537395 cache.go:58] Caching tarball of preloaded images
	I1002 21:18:58.611835  537395 preload.go:233] Found /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:18:58.611846  537395 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:18:58.611970  537395 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/cert-expiration-852898/config.json ...
	I1002 21:18:58.612260  537395 start.go:360] acquireMachinesLock for cert-expiration-852898: {Name:mk9e7957cdce1fd4b26ce430105927ec465bcae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 21:19:00.155530  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:00.156493  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:00.156531  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:00.156927  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:00.156959  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:00.156882  537305 retry.go:31] will retry after 686.158814ms: waiting for domain to come up
	I1002 21:19:00.845238  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:00.846070  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:00.846103  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:00.846459  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:00.846484  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:00.846435  537305 retry.go:31] will retry after 972.201152ms: waiting for domain to come up
	I1002 21:19:01.820013  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:01.820780  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:01.820808  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:01.821212  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:01.821243  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:01.821171  537305 retry.go:31] will retry after 1.20001085s: waiting for domain to come up
	I1002 21:19:03.022380  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:03.023007  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:03.023038  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:03.023353  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:03.023382  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:03.023343  537305 retry.go:31] will retry after 1.789741431s: waiting for domain to come up
	I1002 21:19:04.814948  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:04.815570  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:04.815594  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:04.815938  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:04.816016  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:04.815935  537305 retry.go:31] will retry after 2.291155845s: waiting for domain to come up
	I1002 21:19:02.840158  536818 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.crt ...
	I1002 21:19:02.840191  536818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.crt: {Name:mk6f9e4d6ccf164223e84f585045d94ac0ed2cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:02.840413  536818 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.key ...
	I1002 21:19:02.840434  536818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.key: {Name:mkb393345dd5e00c202e76c8f81bd7b8928f92db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:02.840666  536818 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem (1338 bytes)
	W1002 21:19:02.840742  536818 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569_empty.pem, impossibly tiny 0 bytes
	I1002 21:19:02.840760  536818 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:19:02.840796  536818 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:19:02.840830  536818 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:19:02.840865  536818 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem (1675 bytes)
	I1002 21:19:02.840914  536818 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:19:02.841487  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:19:02.871649  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:19:02.901820  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:19:02.929313  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:19:02.959525  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 21:19:02.987929  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:19:03.019993  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:19:03.052379  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:19:03.084774  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /usr/share/ca-certificates/4975692.pem (1708 bytes)
	I1002 21:19:03.117944  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:19:03.151605  536818 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem --> /usr/share/ca-certificates/497569.pem (1338 bytes)
	I1002 21:19:03.182587  536818 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:19:03.205209  536818 ssh_runner.go:195] Run: openssl version
	I1002 21:19:03.211760  536818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4975692.pem && ln -fs /usr/share/ca-certificates/4975692.pem /etc/ssl/certs/4975692.pem"
	I1002 21:19:03.226350  536818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4975692.pem
	I1002 21:19:03.231634  536818 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:27 /usr/share/ca-certificates/4975692.pem
	I1002 21:19:03.231692  536818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4975692.pem
	I1002 21:19:03.238951  536818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4975692.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:19:03.251910  536818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:19:03.265305  536818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:03.270392  536818 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:03.270454  536818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:03.277442  536818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:19:03.290434  536818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497569.pem && ln -fs /usr/share/ca-certificates/497569.pem /etc/ssl/certs/497569.pem"
	I1002 21:19:03.302597  536818 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497569.pem
	I1002 21:19:03.307617  536818 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:27 /usr/share/ca-certificates/497569.pem
	I1002 21:19:03.307676  536818 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497569.pem
	I1002 21:19:03.316378  536818 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497569.pem /etc/ssl/certs/51391683.0"
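
The openssl/ln pairs above implement OpenSSL-style trust-store lookup: each CA PEM is hashed with `openssl x509 -hash -noout` and symlinked as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem in this log). A sketch of that step:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject-name hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "openssl:", err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: drop any stale link, then point it at the PEM.
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			os.Exit(1)
		}
		fmt.Println(link, "->", pem)
	}
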
	I1002 21:19:03.330542  536818 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:19:03.335583  536818 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:19:03.335653  536818 kubeadm.go:400] StartCluster: {Name:old-k8s-version-166937 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-166937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:19:03.335770  536818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:19:03.335856  536818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:19:03.373043  536818 cri.go:89] found id: ""
	I1002 21:19:03.373137  536818 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:19:03.385257  536818 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:19:03.396112  536818 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:19:03.407248  536818 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:19:03.407268  536818 kubeadm.go:157] found existing configuration files:
	
	I1002 21:19:03.407324  536818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:19:03.417918  536818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:19:03.417981  536818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:19:03.429726  536818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:19:03.445960  536818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:19:03.446035  536818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:19:03.460882  536818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:19:03.473660  536818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:19:03.473752  536818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:19:03.486065  536818 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:19:03.499458  536818 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:19:03.499537  536818 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
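
The grep/rm pairs above are the stale-config sweep: each kubeconfig a previous run may have left behind is kept only if it already points at the expected control-plane endpoint, and removed otherwise so `kubeadm init` starts clean. A sketch of the same loop:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Mirrors `sudo rm -f <conf>` in the log; missing files are fine.
				os.Remove(f)
				fmt.Println("removed stale config:", f)
			}
		}
	}
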
	I1002 21:19:03.514096  536818 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 21:19:03.693546  536818 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:07.108813  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:07.109568  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:07.109599  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:07.110071  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:07.110128  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:07.110058  537305 retry.go:31] will retry after 2.183204439s: waiting for domain to come up
	I1002 21:19:09.294523  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:09.295203  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:09.295242  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:09.295574  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:09.295607  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:09.295561  537305 retry.go:31] will retry after 2.228574213s: waiting for domain to come up
	I1002 21:19:14.494559  536818 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1002 21:19:14.494618  536818 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:19:14.494763  536818 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:19:14.494903  536818 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:19:14.495073  536818 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 21:19:14.495179  536818 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:19:14.496402  536818 out.go:252]   - Generating certificates and keys ...
	I1002 21:19:14.496477  536818 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:19:14.496535  536818 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:19:14.496650  536818 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:19:14.496767  536818 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:19:14.496853  536818 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:19:14.496937  536818 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:19:14.497024  536818 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:19:14.497226  536818 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-166937] and IPs [192.168.72.161 127.0.0.1 ::1]
	I1002 21:19:14.497277  536818 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:19:14.497448  536818 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-166937] and IPs [192.168.72.161 127.0.0.1 ::1]
	I1002 21:19:14.497541  536818 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:19:14.497636  536818 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:19:14.497690  536818 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:19:14.497780  536818 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:19:14.497862  536818 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:19:14.497947  536818 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:19:14.498043  536818 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:19:14.498133  536818 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:19:14.498270  536818 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:19:14.498369  536818 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:19:14.499324  536818 out.go:252]   - Booting up control plane ...
	I1002 21:19:14.499420  536818 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:19:14.499511  536818 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:19:14.499630  536818 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:19:14.499796  536818 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:19:14.499933  536818 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:19:14.499993  536818 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:19:14.500212  536818 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 21:19:14.500306  536818 kubeadm.go:318] [apiclient] All control plane components are healthy after 6.502237 seconds
	I1002 21:19:14.500403  536818 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:19:14.500502  536818 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:19:14.500549  536818 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:19:14.500738  536818 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-166937 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:19:14.500810  536818 kubeadm.go:318] [bootstrap-token] Using token: pari5u.i02l11imt0mrvpkt
	I1002 21:19:14.501924  536818 out.go:252]   - Configuring RBAC rules ...
	I1002 21:19:14.502058  536818 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:19:14.502180  536818 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:19:14.502379  536818 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:19:14.502554  536818 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:19:14.502723  536818 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:19:14.502841  536818 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:19:14.503004  536818 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:19:14.503070  536818 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:19:14.503118  536818 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:19:14.503125  536818 kubeadm.go:318] 
	I1002 21:19:14.503179  536818 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:19:14.503185  536818 kubeadm.go:318] 
	I1002 21:19:14.503248  536818 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:19:14.503253  536818 kubeadm.go:318] 
	I1002 21:19:14.503274  536818 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:19:14.503328  536818 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:19:14.503372  536818 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:19:14.503377  536818 kubeadm.go:318] 
	I1002 21:19:14.503445  536818 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:19:14.503459  536818 kubeadm.go:318] 
	I1002 21:19:14.503512  536818 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:19:14.503523  536818 kubeadm.go:318] 
	I1002 21:19:14.503570  536818 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:19:14.503631  536818 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:19:14.503688  536818 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:19:14.503694  536818 kubeadm.go:318] 
	I1002 21:19:14.503843  536818 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:19:14.503971  536818 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:19:14.503980  536818 kubeadm.go:318] 
	I1002 21:19:14.504083  536818 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pari5u.i02l11imt0mrvpkt \
	I1002 21:19:14.504210  536818 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:32d7d270f6a5dfe43597582240b68ebe9df949068deb05a8c74918e20d720da3 \
	I1002 21:19:14.504256  536818 kubeadm.go:318] 	--control-plane 
	I1002 21:19:14.504264  536818 kubeadm.go:318] 
	I1002 21:19:14.504366  536818 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:19:14.504373  536818 kubeadm.go:318] 
	I1002 21:19:14.504472  536818 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pari5u.i02l11imt0mrvpkt \
	I1002 21:19:14.504613  536818 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:32d7d270f6a5dfe43597582240b68ebe9df949068deb05a8c74918e20d720da3 
	I1002 21:19:14.504628  536818 cni.go:84] Creating CNI manager for ""
	I1002 21:19:14.504636  536818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:19:14.505880  536818 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 21:19:11.525792  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:11.526537  537126 main.go:141] libmachine: (no-preload-397715) DBG | no network interface addresses found for domain no-preload-397715 (source=lease)
	I1002 21:19:11.526572  537126 main.go:141] libmachine: (no-preload-397715) DBG | trying to list again with source=arp
	I1002 21:19:11.526894  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find current IP address of domain no-preload-397715 in network mk-no-preload-397715 (interfaces detected: [])
	I1002 21:19:11.526924  537126 main.go:141] libmachine: (no-preload-397715) DBG | I1002 21:19:11.526868  537305 retry.go:31] will retry after 3.233162236s: waiting for domain to come up
	I1002 21:19:14.763907  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:14.764624  537126 main.go:141] libmachine: (no-preload-397715) found domain IP: 192.168.61.202
	I1002 21:19:14.764651  537126 main.go:141] libmachine: (no-preload-397715) reserving static IP address...
	I1002 21:19:14.764664  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has current primary IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:14.765128  537126 main.go:141] libmachine: (no-preload-397715) DBG | unable to find host DHCP lease matching {name: "no-preload-397715", mac: "52:54:00:b7:18:7b", ip: "192.168.61.202"} in network mk-no-preload-397715
	I1002 21:19:14.981659  537126 main.go:141] libmachine: (no-preload-397715) DBG | Getting to WaitForSSH function...
	I1002 21:19:14.981697  537126 main.go:141] libmachine: (no-preload-397715) reserved static IP address 192.168.61.202 for domain no-preload-397715
	I1002 21:19:14.981792  537126 main.go:141] libmachine: (no-preload-397715) waiting for SSH...
	I1002 21:19:14.984878  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:14.985288  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:14.985332  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:14.985534  537126 main.go:141] libmachine: (no-preload-397715) DBG | Using SSH client type: external
	I1002 21:19:14.985563  537126 main.go:141] libmachine: (no-preload-397715) DBG | Using SSH private key: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/id_rsa (-rw-------)
	I1002 21:19:14.985589  537126 main.go:141] libmachine: (no-preload-397715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 21:19:14.985605  537126 main.go:141] libmachine: (no-preload-397715) DBG | About to run SSH command:
	I1002 21:19:14.985615  537126 main.go:141] libmachine: (no-preload-397715) DBG | exit 0
	I1002 21:19:15.117024  537126 main.go:141] libmachine: (no-preload-397715) DBG | SSH cmd err, output: <nil>: 
	I1002 21:19:15.117419  537126 main.go:141] libmachine: (no-preload-397715) domain creation complete
	I1002 21:19:15.117803  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetConfigRaw
	I1002 21:19:15.118428  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:19:15.118612  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:19:15.118812  537126 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 21:19:15.118832  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetState
	I1002 21:19:15.120366  537126 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 21:19:15.120383  537126 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 21:19:15.120398  537126 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 21:19:15.120406  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:15.123135  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.123553  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:15.123584  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.123747  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:15.123950  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.124101  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.124263  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:15.124424  537126 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:15.124754  537126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I1002 21:19:15.124770  537126 main.go:141] libmachine: About to run SSH command:
	exit 0
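
The "waiting for SSH" phase above amounts to probing the guest's port 22 until it accepts a connection and then running `exit 0` over SSH. A sketch of the reachability half (waitForPort is a hypothetical helper; the SSH exec itself is omitted):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort dials the address until it accepts a TCP connection or the
	// timeout elapses; on success the real driver runs `exit 0` over SSH.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		// IP from the DHCP lease recorded above.
		if err := waitForPort("192.168.61.202:22", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("port 22 open; the driver would now run `exit 0` over SSH")
	}
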
	I1002 21:19:16.666030  537164 start.go:364] duration metric: took 30.057071179s to acquireMachinesLock for "pause-128856"
	I1002 21:19:16.666081  537164 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:19:16.666092  537164 fix.go:54] fixHost starting: 
	I1002 21:19:16.666539  537164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:16.666594  537164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:16.685390  537164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I1002 21:19:16.685864  537164 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:16.686382  537164 main.go:141] libmachine: Using API Version  1
	I1002 21:19:16.686412  537164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:16.686916  537164 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:16.687196  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:16.687380  537164 main.go:141] libmachine: (pause-128856) Calling .GetState
	I1002 21:19:16.689234  537164 fix.go:112] recreateIfNeeded on pause-128856: state=Running err=<nil>
	W1002 21:19:16.689275  537164 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:19:14.506796  536818 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 21:19:14.529719  536818 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 21:19:14.558992  536818 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:19:14.559174  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:14.559197  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-166937 minikube.k8s.io/updated_at=2025_10_02T21_19_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=old-k8s-version-166937 minikube.k8s.io/primary=true
	I1002 21:19:14.621267  536818 ops.go:34] apiserver oom_adj: -16
	I1002 21:19:14.713363  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:15.213927  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:15.713584  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:16.213959  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:16.713955  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:17.213539  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:17.714087  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:15.224117  537126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:19:15.224145  537126 main.go:141] libmachine: Detecting the provisioner...
	I1002 21:19:15.224157  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:15.228027  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.228416  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:15.228448  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.228701  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:15.228952  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.229139  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.229328  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:15.229553  537126 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:15.229883  537126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I1002 21:19:15.229905  537126 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 21:19:15.332962  537126 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 21:19:15.333042  537126 main.go:141] libmachine: found compatible host: buildroot
	I1002 21:19:15.333053  537126 main.go:141] libmachine: Provisioning with buildroot...
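
Provisioner detection above reads /etc/os-release on the guest and matches the ID field; only the buildroot branch appears in this log. A local-file sketch of the parse:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
		s := bufio.NewScanner(f)
		for s.Scan() {
			if v, ok := strings.CutPrefix(s.Text(), "ID="); ok {
				if strings.Trim(v, `"`) == "buildroot" {
					fmt.Println("found compatible host: buildroot")
				} else {
					fmt.Println("unrecognized provisioner:", v)
				}
				return
			}
		}
	}
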
	I1002 21:19:15.333062  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetMachineName
	I1002 21:19:15.333321  537126 buildroot.go:166] provisioning hostname "no-preload-397715"
	I1002 21:19:15.333359  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetMachineName
	I1002 21:19:15.333535  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:15.336856  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.337310  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:15.337344  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.337510  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:15.337721  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.337901  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.338070  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:15.338276  537126 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:15.338506  537126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I1002 21:19:15.338523  537126 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-397715 && echo "no-preload-397715" | sudo tee /etc/hostname
	I1002 21:19:15.458270  537126 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-397715
	
	I1002 21:19:15.458318  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:15.461662  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.462091  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:15.462128  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.462323  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:15.462496  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.462676  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.462839  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:15.463014  537126 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:15.463256  537126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I1002 21:19:15.463285  537126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397715/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:19:15.578665  537126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
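The hosts fix-up above is written to be idempotent: the outer grep guard skips the edit whenever the name already resolves, so repeated provisioning runs don't stack duplicate 127.0.1.1 entries. A quick way to confirm the result on the guest (a sketch; the ssh target is this run's VM and the root user is an assumption):

	# verify the hostname and the loopback alias took effect (illustrative re-check only)
	ssh root@192.168.61.202 'hostname; grep 127.0.1.1 /etc/hosts'
	# expected: no-preload-397715 / 127.0.1.1 no-preload-397715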
	I1002 21:19:15.578731  537126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21682-492630/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-492630/.minikube}
	I1002 21:19:15.578807  537126 buildroot.go:174] setting up certificates
	I1002 21:19:15.578826  537126 provision.go:84] configureAuth start
	I1002 21:19:15.578846  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetMachineName
	I1002 21:19:15.579217  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetIP
	I1002 21:19:15.583076  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.583566  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:15.583603  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.583819  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:15.586815  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.587384  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:15.587427  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.587656  537126 provision.go:143] copyHostCerts
	I1002 21:19:15.587736  537126 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem, removing ...
	I1002 21:19:15.587752  537126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem
	I1002 21:19:15.587833  537126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem (1078 bytes)
	I1002 21:19:15.587990  537126 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem, removing ...
	I1002 21:19:15.588005  537126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem
	I1002 21:19:15.588041  537126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem (1123 bytes)
	I1002 21:19:15.588165  537126 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem, removing ...
	I1002 21:19:15.588180  537126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem
	I1002 21:19:15.588210  537126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem (1675 bytes)
	I1002 21:19:15.588267  537126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem org=jenkins.no-preload-397715 san=[127.0.0.1 192.168.61.202 localhost minikube no-preload-397715]
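The server certificate generated here covers every name a client might dial: the loopback address, the VM IP, and the short hostnames. Minikube builds this cert in Go against its own CA; an openssl equivalent, purely as an illustration of the same SAN set (not minikube's actual code path), would look like:

	# illustration only: minikube generates this in Go, not via openssl
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.no-preload-397715"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.202,DNS:localhost,DNS:minikube,DNS:no-preload-397715')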
	I1002 21:19:15.983405  537126 provision.go:177] copyRemoteCerts
	I1002 21:19:15.983477  537126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:19:15.983513  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:15.986862  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.987325  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:15.987355  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:15.987543  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:15.987760  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:15.987965  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:15.988169  537126 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/id_rsa Username:docker}
	I1002 21:19:16.069860  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:19:16.098612  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:19:16.128952  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:19:16.158084  537126 provision.go:87] duration metric: took 579.238697ms to configureAuth
	I1002 21:19:16.158117  537126 buildroot.go:189] setting minikube options for container-runtime
	I1002 21:19:16.158347  537126 config.go:182] Loaded profile config "no-preload-397715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:19:16.158450  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:16.161406  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.161842  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.161875  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.162081  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:16.162327  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:16.162499  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:16.162729  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:16.162929  537126 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.163163  537126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I1002 21:19:16.163185  537126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:19:16.410933  537126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
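The single runtime flag is persisted under /etc/sysconfig so it survives reboots, and crio is restarted in the same SSH round-trip. On the guest it can be verified with (a sketch; paths as logged):

	cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # should print: active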
	I1002 21:19:16.410976  537126 main.go:141] libmachine: Checking connection to Docker...
	I1002 21:19:16.410988  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetURL
	I1002 21:19:16.412255  537126 main.go:141] libmachine: (no-preload-397715) DBG | using libvirt version 8000000
	I1002 21:19:16.414885  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.415257  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.415284  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.415468  537126 main.go:141] libmachine: Docker is up and running!
	I1002 21:19:16.415483  537126 main.go:141] libmachine: Reticulating splines...
	I1002 21:19:16.415492  537126 client.go:171] duration metric: took 20.320635395s to LocalClient.Create
	I1002 21:19:16.415522  537126 start.go:167] duration metric: took 20.32070106s to libmachine.API.Create "no-preload-397715"
	I1002 21:19:16.415537  537126 start.go:293] postStartSetup for "no-preload-397715" (driver="kvm2")
	I1002 21:19:16.415549  537126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:19:16.415573  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:19:16.415826  537126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:19:16.415851  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:16.418151  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.418484  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.418512  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.418652  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:16.418886  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:16.419055  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:16.419250  537126 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/id_rsa Username:docker}
	I1002 21:19:16.505848  537126 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:19:16.510824  537126 info.go:137] Remote host: Buildroot 2025.02
	I1002 21:19:16.510848  537126 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/addons for local assets ...
	I1002 21:19:16.510915  537126 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/files for local assets ...
	I1002 21:19:16.510985  537126 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem -> 4975692.pem in /etc/ssl/certs
	I1002 21:19:16.511071  537126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:19:16.521521  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:19:16.551663  537126 start.go:296] duration metric: took 136.109811ms for postStartSetup
	I1002 21:19:16.551740  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetConfigRaw
	I1002 21:19:16.552377  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetIP
	I1002 21:19:16.555700  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.556118  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.556157  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.556423  537126 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/no-preload-397715/config.json ...
	I1002 21:19:16.556671  537126 start.go:128] duration metric: took 20.482504016s to createHost
	I1002 21:19:16.556718  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:16.559198  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.559589  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.559615  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.559765  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:16.559960  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:16.560142  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:16.560278  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:16.560462  537126 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.560736  537126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.202 22 <nil> <nil>}
	I1002 21:19:16.560751  537126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 21:19:16.665850  537126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759439956.614374993
	
	I1002 21:19:16.665880  537126 fix.go:216] guest clock: 1759439956.614374993
	I1002 21:19:16.665892  537126 fix.go:229] Guest: 2025-10-02 21:19:16.614374993 +0000 UTC Remote: 2025-10-02 21:19:16.556687371 +0000 UTC m=+31.468846439 (delta=57.687622ms)
	I1002 21:19:16.665923  537126 fix.go:200] guest clock delta is within tolerance: 57.687622ms
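The clock check compares the guest's `date +%s.%N` against the host-side timestamp taken for the same moment and only resynchronizes when the delta leaves tolerance; here 57.7ms is well inside it. A rough shell analogue (the ssh target and user are assumptions; `bc` handles the fractional math):

	# approximate the guest-clock delta check by hand (illustrative)
	guest=$(ssh root@192.168.61.202 'date +%s.%N')
	host=$(date +%s.%N)
	echo "delta: $(echo "($host - $guest)" | bc)s"   # sign shows which clock is ahead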
	I1002 21:19:16.665934  537126 start.go:83] releasing machines lock for "no-preload-397715", held for 20.591973192s
	I1002 21:19:16.665984  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:19:16.666306  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetIP
	I1002 21:19:16.669730  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.670263  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.670293  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.670539  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:19:16.671245  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:19:16.671484  537126 main.go:141] libmachine: (no-preload-397715) Calling .DriverName
	I1002 21:19:16.671590  537126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:19:16.671665  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:16.671743  537126 ssh_runner.go:195] Run: cat /version.json
	I1002 21:19:16.671774  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHHostname
	I1002 21:19:16.675326  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.675459  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.675853  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.675886  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.676068  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:16.676094  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:16.676174  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:16.676379  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:16.676458  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHPort
	I1002 21:19:16.676582  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:16.676644  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHKeyPath
	I1002 21:19:16.676753  537126 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/id_rsa Username:docker}
	I1002 21:19:16.676818  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetSSHUsername
	I1002 21:19:16.676959  537126 sshutil.go:53] new ssh client: &{IP:192.168.61.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/no-preload-397715/id_rsa Username:docker}
	I1002 21:19:16.752938  537126 ssh_runner.go:195] Run: systemctl --version
	I1002 21:19:16.783832  537126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:19:16.944485  537126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:19:16.954352  537126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:19:16.954434  537126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:19:16.978266  537126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:19:16.978288  537126 start.go:495] detecting cgroup driver to use...
	I1002 21:19:16.978358  537126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:19:16.999775  537126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:19:17.017936  537126 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:19:17.018013  537126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:19:17.036217  537126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:19:17.053363  537126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:19:17.212703  537126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:19:17.456326  537126 docker.go:234] disabling docker service ...
	I1002 21:19:17.456408  537126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:19:17.473779  537126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:19:17.487854  537126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:19:17.664290  537126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:19:17.820993  537126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:19:17.836724  537126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:19:17.857685  537126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:19:17.857768  537126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:17.868973  537126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:19:17.869031  537126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:17.880623  537126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:17.891881  537126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:17.903061  537126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:19:17.916256  537126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:17.928548  537126 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:17.947454  537126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
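The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place; collected into one script (the same commands as logged, just grouped with a variable for the path) they read:

	conf=/etc/crio/crio.conf.d/02-crio.conf
	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' $conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' $conf
	# re-create conmon_cgroup = "pod" directly under the cgroup_manager line
	sudo sed -i '/conmon_cgroup = .*/d' $conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' $conf
	# let pods bind ports below 1024 without extra privileges
	sudo grep -q '^ *default_sysctls' $conf || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' $conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' $conf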
	I1002 21:19:17.958766  537126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:19:17.968719  537126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 21:19:17.968778  537126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 21:19:17.986790  537126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
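The failed sysctl probe just means br_netfilter was not loaded yet; loading the module creates the /proc/sys/net/bridge/* entries and, with IPv4 forwarding switched on, lets iptables see bridged pod traffic:

	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sysctl net.bridge.bridge-nf-call-iptables   # now resolvable; kube-proxy expects this to be 1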
	I1002 21:19:17.997146  537126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:18.141615  537126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:19:18.249972  537126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:19:18.250074  537126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:19:18.255722  537126 start.go:563] Will wait 60s for crictl version
	I1002 21:19:18.255794  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.259798  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:19:18.299261  537126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 21:19:18.299348  537126 ssh_runner.go:195] Run: crio --version
	I1002 21:19:18.328779  537126 ssh_runner.go:195] Run: crio --version
	I1002 21:19:18.358874  537126 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 21:19:18.359861  537126 main.go:141] libmachine: (no-preload-397715) Calling .GetIP
	I1002 21:19:18.363051  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:18.363443  537126 main.go:141] libmachine: (no-preload-397715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:7b", ip: ""} in network mk-no-preload-397715: {Iface:virbr1 ExpiryTime:2025-10-02 22:19:11 +0000 UTC Type:0 Mac:52:54:00:b7:18:7b Iaid: IPaddr:192.168.61.202 Prefix:24 Hostname:no-preload-397715 Clientid:01:52:54:00:b7:18:7b}
	I1002 21:19:18.363466  537126 main.go:141] libmachine: (no-preload-397715) DBG | domain no-preload-397715 has defined IP address 192.168.61.202 and MAC address 52:54:00:b7:18:7b in network mk-no-preload-397715
	I1002 21:19:18.363697  537126 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 21:19:18.367893  537126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
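Note the shape of the hosts rewrite: the filtered copy is assembled unprivileged in /tmp and only the final cp runs under sudo, because the output redirect in `sudo cmd > file` is performed by the calling shell, not by sudo. The same pattern in isolation:

	# rewrite /etc/hosts: only the cp needs root, the redirect never does
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.61.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$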
	I1002 21:19:18.382666  537126 kubeadm.go:883] updating cluster {Name:no-preload-397715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.202 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:19:18.382817  537126 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:19:18.382859  537126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:19:18.416031  537126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 21:19:18.416062  537126 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 21:19:18.416153  537126 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:18.416171  537126 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:19:18.416195  537126 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:19:18.416197  537126 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1002 21:19:18.416153  537126 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:19:18.416169  537126 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:19:18.416153  537126 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:19:18.416178  537126 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:19:18.417549  537126 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:19:18.417618  537126 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:18.417632  537126 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:19:18.417549  537126 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:19:18.417552  537126 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:19:18.417549  537126 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:19:18.417550  537126 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1002 21:19:18.417551  537126 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:19:18.543880  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:19:18.547214  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1002 21:19:18.552412  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:19:18.552640  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:19:18.570619  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:19:18.573566  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1002 21:19:18.579198  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:19:18.611655  537126 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1002 21:19:18.611738  537126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:19:18.611812  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.678148  537126 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1002 21:19:18.678203  537126 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1002 21:19:18.678263  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.708055  537126 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1002 21:19:18.708102  537126 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1002 21:19:18.708140  537126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:19:18.708105  537126 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:19:18.708205  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.708286  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.733731  537126 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1002 21:19:18.733763  537126 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1002 21:19:18.733778  537126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:19:18.733782  537126 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1002 21:19:18.733826  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.733826  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.741014  537126 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1002 21:19:18.741036  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:19:18.741055  537126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:19:18.741055  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:19:18.741058  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:19:18.741095  537126 ssh_runner.go:195] Run: which crictl
	I1002 21:19:18.741101  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:19:18.749521  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:19:18.749626  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:19:18.866608  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:19:18.866636  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:19:18.866744  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:19:18.866781  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:19:18.866791  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:19:18.866833  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:19:18.881152  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:19:18.999043  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1002 21:19:18.999117  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1002 21:19:18.999116  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1002 21:19:18.999046  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1002 21:19:18.999191  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:19:19.007496  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1002 21:19:19.016840  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1002 21:19:19.141776  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1002 21:19:19.141840  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1002 21:19:19.141877  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1002 21:19:19.141902  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:19:19.141940  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:19:19.141955  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1002 21:19:19.141963  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1002 21:19:19.142011  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1002 21:19:19.142034  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 21:19:19.142036  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1002 21:19:19.142089  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1002 21:19:19.142105  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:19:19.142151  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1002 21:19:19.157827  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1002 21:19:19.157857  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1002 21:19:19.158218  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1002 21:19:19.158242  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1002 21:19:19.158247  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1002 21:19:19.158268  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1002 21:19:19.232881  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1002 21:19:19.232929  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1002 21:19:19.232881  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1002 21:19:19.232971  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1002 21:19:19.233008  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1002 21:19:19.233027  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1002 21:19:19.232978  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1002 21:19:19.233090  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:19:19.352469  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1002 21:19:19.352510  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
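Every transfer above is gated on the same probe: `stat -c "%s %y"` on the guest path, where a non-zero exit (file missing) triggers an scp from the host-side cache. Reduced to one image (the ssh user/host and the generic ~/.minikube cache root are assumptions for this sketch):

	# copy a cached image tarball only if the guest doesn't already have it
	img=kube-scheduler_v1.34.1
	ssh root@192.168.61.202 "stat -c '%s %y' /var/lib/minikube/images/$img" \
	  || scp "$HOME/.minikube/cache/images/amd64/registry.k8s.io/$img" \
	         root@192.168.61.202:/var/lib/minikube/images/$img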
	I1002 21:19:19.386054  537126 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1002 21:19:19.386133  537126 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1002 21:19:19.773053  537126 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:19.840626  537126 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1002 21:19:19.840680  537126 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:19:19.840758  537126 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1002 21:19:19.875411  537126 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1002 21:19:19.875470  537126 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:19.875540  537126 ssh_runner.go:195] Run: which crictl
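Loading then proceeds serially: each tarball goes through `podman load -i`, and `podman image inspect` confirms the image landed in the shared containers/storage that CRI-O also reads; anything still missing or at the wrong hash is removed with crictl and re-transferred. The two commands involved, as logged:

	sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/coredns/coredns:v1.12.1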
	I1002 21:19:16.692107  537164 out.go:252] * Updating the running kvm2 "pause-128856" VM ...
	I1002 21:19:16.692149  537164 machine.go:93] provisionDockerMachine start ...
	I1002 21:19:16.692170  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:16.692374  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:16.695246  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.695678  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:16.695721  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.695925  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:16.696080  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.696249  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.696368  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:16.696555  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.696836  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:16.696848  537164 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:19:16.812516  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-128856
	
	I1002 21:19:16.812542  537164 main.go:141] libmachine: (pause-128856) Calling .GetMachineName
	I1002 21:19:16.812963  537164 buildroot.go:166] provisioning hostname "pause-128856"
	I1002 21:19:16.812996  537164 main.go:141] libmachine: (pause-128856) Calling .GetMachineName
	I1002 21:19:16.813181  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:16.816641  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.817077  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:16.817117  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.817304  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:16.817539  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.817723  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.817899  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:16.818100  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.818333  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:16.818345  537164 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-128856 && echo "pause-128856" | sudo tee /etc/hostname
	I1002 21:19:16.956194  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-128856
	
	I1002 21:19:16.956241  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:16.959774  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.960211  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:16.960239  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:16.960483  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:16.960728  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.960913  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:16.961068  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:16.961261  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:16.961539  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:16.961558  537164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-128856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-128856/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-128856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:19:17.085504  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:19:17.085536  537164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21682-492630/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-492630/.minikube}
	I1002 21:19:17.085574  537164 buildroot.go:174] setting up certificates
	I1002 21:19:17.085586  537164 provision.go:84] configureAuth start
	I1002 21:19:17.085602  537164 main.go:141] libmachine: (pause-128856) Calling .GetMachineName
	I1002 21:19:17.085938  537164 main.go:141] libmachine: (pause-128856) Calling .GetIP
	I1002 21:19:17.089122  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.089647  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.089673  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.089894  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:17.092778  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.093295  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.093317  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.093522  537164 provision.go:143] copyHostCerts
	I1002 21:19:17.093586  537164 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem, removing ...
	I1002 21:19:17.093612  537164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem
	I1002 21:19:17.093676  537164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem (1078 bytes)
	I1002 21:19:17.093837  537164 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem, removing ...
	I1002 21:19:17.093850  537164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem
	I1002 21:19:17.093888  537164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem (1123 bytes)
	I1002 21:19:17.093989  537164 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem, removing ...
	I1002 21:19:17.094001  537164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem
	I1002 21:19:17.094051  537164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem (1675 bytes)
	I1002 21:19:17.094142  537164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem org=jenkins.pause-128856 san=[127.0.0.1 192.168.39.39 localhost minikube pause-128856]
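The SAN list on the generated server cert (127.0.0.1, 192.168.39.39, localhost, minikube, pause-128856) is what lets TLS verification succeed no matter which of the machine's addresses a client dials. minikube generates and CA-signs this cert internally in Go; purely as an illustration of the same SAN set, a self-signed openssl equivalent would look like:

	# illustration only: minikube actually signs with its CA key, not self-signed
	openssl genrsa -out server-key.pem 2048
	openssl req -new -x509 -days 365 -key server-key.pem -out server.pem \
		-subj "/O=jenkins.pause-128856" \
		-addext "subjectAltName=IP:127.0.0.1,IP:192.168.39.39,DNS:localhost,DNS:minikube,DNS:pause-128856"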
	I1002 21:19:17.204083  537164 provision.go:177] copyRemoteCerts
	I1002 21:19:17.204145  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:19:17.204177  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:17.207371  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.207778  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.207813  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.208091  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:17.208315  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:17.208504  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:17.208688  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:17.306236  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:19:17.340938  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 21:19:17.373580  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:19:17.404515  537164 provision.go:87] duration metric: took 318.909798ms to configureAuth
	I1002 21:19:17.404552  537164 buildroot.go:189] setting minikube options for container-runtime
	I1002 21:19:17.404854  537164 config.go:182] Loaded profile config "pause-128856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:19:17.404947  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:17.408220  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.408619  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:17.408671  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:17.408873  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:17.409089  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:17.409261  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:17.409388  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:17.409565  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:17.409860  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:17.409878  537164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:19:18.214429  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:18.713954  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:19.213934  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:19.714419  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:20.213500  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:20.713540  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:21.213966  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:21.714025  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:22.213670  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:22.713606  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:23.251564  537395 start.go:364] duration metric: took 24.639264771s to acquireMachinesLock for "cert-expiration-852898"
	I1002 21:19:23.251616  537395 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:19:23.251623  537395 fix.go:54] fixHost starting: 
	I1002 21:19:23.252127  537395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:23.252182  537395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:23.271073  537395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I1002 21:19:23.271540  537395 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:23.272369  537395 main.go:141] libmachine: Using API Version  1
	I1002 21:19:23.272391  537395 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:23.272943  537395 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:23.273175  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:19:23.273345  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetState
	I1002 21:19:23.275367  537395 fix.go:112] recreateIfNeeded on cert-expiration-852898: state=Running err=<nil>
	W1002 21:19:23.275398  537395 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:19:23.276759  537395 out.go:252] * Updating the running kvm2 "cert-expiration-852898" VM ...
	I1002 21:19:23.276801  537395 machine.go:93] provisionDockerMachine start ...
	I1002 21:19:23.276817  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:19:23.277013  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:23.281228  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.281288  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:23.281749  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:23.281748  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.281921  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:23.282031  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:23.282204  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:23.282414  537395 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:23.282817  537395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1002 21:19:23.282825  537395 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:19:23.400569  537395 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-852898
	
	I1002 21:19:23.400591  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetMachineName
	I1002 21:19:23.400892  537395 buildroot.go:166] provisioning hostname "cert-expiration-852898"
	I1002 21:19:23.400916  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetMachineName
	I1002 21:19:23.401109  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:23.404638  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.404993  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:23.405015  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.405178  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:23.405371  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:23.405549  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:23.405692  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:23.405953  537395 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:23.406263  537395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1002 21:19:23.406281  537395 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-852898 && echo "cert-expiration-852898" | sudo tee /etc/hostname
	I1002 21:19:22.573426  537126 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.732627631s)
	I1002 21:19:22.573471  537126 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1002 21:19:22.573487  537126 ssh_runner.go:235] Completed: which crictl: (2.697921154s)
	I1002 21:19:22.573502  537126 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:19:22.573559  537126 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 21:19:22.573559  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:24.959355  537126 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.385762616s)
	I1002 21:19:24.959393  537126 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1002 21:19:24.959419  537126 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 21:19:24.959430  537126 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.385772076s)
	I1002 21:19:24.959477  537126 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 21:19:24.959509  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:22.968991  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:19:22.969028  537164 machine.go:96] duration metric: took 6.276864804s to provisionDockerMachine
	I1002 21:19:22.969043  537164 start.go:293] postStartSetup for "pause-128856" (driver="kvm2")
	I1002 21:19:22.969056  537164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:19:22.969081  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:22.969508  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:19:22.969550  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:22.973346  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:22.973815  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:22.973846  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:22.974105  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:22.974292  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:22.974483  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:22.974646  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:23.069770  537164 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:19:23.075977  537164 info.go:137] Remote host: Buildroot 2025.02
	I1002 21:19:23.076010  537164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/addons for local assets ...
	I1002 21:19:23.076082  537164 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/files for local assets ...
	I1002 21:19:23.076159  537164 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem -> 4975692.pem in /etc/ssl/certs
	I1002 21:19:23.076247  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:19:23.088678  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:19:23.124617  537164 start.go:296] duration metric: took 155.553551ms for postStartSetup
	I1002 21:19:23.124671  537164 fix.go:56] duration metric: took 6.458578183s for fixHost
	I1002 21:19:23.124700  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:23.128630  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.129116  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.129148  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.129436  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:23.129721  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.129955  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.130158  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:23.130381  537164 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:23.130724  537164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.39 22 <nil> <nil>}
	I1002 21:19:23.130740  537164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 21:19:23.251349  537164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759439963.248315388
	
	I1002 21:19:23.251379  537164 fix.go:216] guest clock: 1759439963.248315388
	I1002 21:19:23.251390  537164 fix.go:229] Guest: 2025-10-02 21:19:23.248315388 +0000 UTC Remote: 2025-10-02 21:19:23.124676817 +0000 UTC m=+36.674789897 (delta=123.638571ms)
	I1002 21:19:23.251457  537164 fix.go:200] guest clock delta is within tolerance: 123.638571ms
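The tolerance check above is plain arithmetic over the two timestamps it just printed: guest minus remote is 23.248315388s − 23.124676817s = 0.123638571s ≈ 124ms of skew, small enough that no guest clock resync is triggered.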
	I1002 21:19:23.251464  537164 start.go:83] releasing machines lock for "pause-128856", held for 6.585405527s
	I1002 21:19:23.251498  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.251819  537164 main.go:141] libmachine: (pause-128856) Calling .GetIP
	I1002 21:19:23.256149  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.256653  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.256690  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.257139  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.257847  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.258041  537164 main.go:141] libmachine: (pause-128856) Calling .DriverName
	I1002 21:19:23.258142  537164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:19:23.258210  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:23.258289  537164 ssh_runner.go:195] Run: cat /version.json
	I1002 21:19:23.258301  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHHostname
	I1002 21:19:23.262923  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.263389  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.263446  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.263791  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.263940  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:23.264146  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.264353  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:23.264517  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:23.265476  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:23.265500  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:23.265818  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHPort
	I1002 21:19:23.266074  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHKeyPath
	I1002 21:19:23.266290  537164 main.go:141] libmachine: (pause-128856) Calling .GetSSHUsername
	I1002 21:19:23.266483  537164 sshutil.go:53] new ssh client: &{IP:192.168.39.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/pause-128856/id_rsa Username:docker}
	I1002 21:19:23.380477  537164 ssh_runner.go:195] Run: systemctl --version
	I1002 21:19:23.389280  537164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:19:23.546962  537164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:19:23.561043  537164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:19:23.561143  537164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:19:23.577463  537164 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:19:23.577504  537164 start.go:495] detecting cgroup driver to use...
	I1002 21:19:23.577587  537164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:19:23.605026  537164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:19:23.625892  537164 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:19:23.625974  537164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:19:23.648584  537164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:19:23.666721  537164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:19:23.899367  537164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:19:24.104366  537164 docker.go:234] disabling docker service ...
	I1002 21:19:24.104449  537164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:19:24.136729  537164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:19:24.155569  537164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:19:24.374974  537164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:19:24.590423  537164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
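Because this profile's container runtime is crio, the provisioner stops, disables, and masks both the cri-docker and docker units (sockets first, then services) so nothing can re-bind the CRI socket; the final `is-active` probe confirms docker stayed down. Condensed, the sequence logged at 21:19:23.62–24.59 amounts to:

	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service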
	I1002 21:19:24.623794  537164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:19:24.655402  537164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:19:24.655495  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.669425  537164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:19:24.669525  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.685047  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.701192  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.720671  537164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:19:24.749912  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.765535  537164 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.778968  537164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:24.794533  537164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:19:24.806679  537164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:19:24.817876  537164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:25.084347  537164 ssh_runner.go:195] Run: sudo systemctl restart crio
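The sed pipeline at 21:19:24.655–24.794 rewrites /etc/crio/crio.conf.d/02-crio.conf in place, and the daemon-reload plus crio restart above make it take effect. Reconstructed from those sed expressions (exact ordering and whitespace in the real file may differ), the effective drop-in carries:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]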
	I1002 21:19:23.213574  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:23.713431  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:24.213963  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:24.713891  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:25.214273  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:25.713472  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:26.213856  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:26.713927  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:27.214237  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:27.713978  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:28.213542  536818 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:19:28.340217  536818 kubeadm.go:1113] duration metric: took 13.781130358s to wait for elevateKubeSystemPrivileges
	I1002 21:19:28.340268  536818 kubeadm.go:402] duration metric: took 25.004622377s to StartCluster
	I1002 21:19:28.340293  536818 settings.go:142] acquiring lock: {Name:mk713e1c8098ab4e764fe2cb637b0408c7b1a3ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:28.340385  536818 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:19:28.342291  536818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/kubeconfig: {Name:mk4bbb10e20496c232fa2a76298e716d67d36cbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:28.342613  536818 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.161 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:19:28.342703  536818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:19:28.342797  536818 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:19:28.342901  536818 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-166937"
	I1002 21:19:28.342923  536818 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-166937"
	I1002 21:19:28.342962  536818 host.go:66] Checking if "old-k8s-version-166937" exists ...
	I1002 21:19:28.342975  536818 config.go:182] Loaded profile config "old-k8s-version-166937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:19:28.342977  536818 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-166937"
	I1002 21:19:28.343002  536818 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-166937"
	I1002 21:19:28.343482  536818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:28.343525  536818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:28.343570  536818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:28.343606  536818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:28.347836  536818 out.go:179] * Verifying Kubernetes components...
	I1002 21:19:28.349072  536818 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:28.358695  536818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I1002 21:19:28.359504  536818 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:28.360075  536818 main.go:141] libmachine: Using API Version  1
	I1002 21:19:28.360099  536818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:28.360591  536818 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:28.360833  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetState
	I1002 21:19:28.362317  536818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I1002 21:19:28.362873  536818 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:28.363346  536818 main.go:141] libmachine: Using API Version  1
	I1002 21:19:28.363379  536818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:28.363789  536818 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:28.364413  536818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:28.364467  536818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:28.365327  536818 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-166937"
	I1002 21:19:28.365383  536818 host.go:66] Checking if "old-k8s-version-166937" exists ...
	I1002 21:19:28.365819  536818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:28.365868  536818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:28.378504  536818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I1002 21:19:28.379063  536818 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:28.379578  536818 main.go:141] libmachine: Using API Version  1
	I1002 21:19:28.379600  536818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:28.379988  536818 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:28.380188  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetState
	I1002 21:19:28.382466  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .DriverName
	I1002 21:19:28.383597  536818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36499
	I1002 21:19:28.384069  536818 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:23.553456  537395 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-852898
	
	I1002 21:19:23.553484  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:23.558895  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.559697  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:23.559805  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.560061  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:23.560362  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:23.560603  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:23.560947  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:23.561228  537395 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:23.561511  537395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1002 21:19:23.561528  537395 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-852898' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-852898/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-852898' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:19:23.688947  537395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:19:23.688972  537395 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21682-492630/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-492630/.minikube}
	I1002 21:19:23.689019  537395 buildroot.go:174] setting up certificates
	I1002 21:19:23.689031  537395 provision.go:84] configureAuth start
	I1002 21:19:23.689052  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetMachineName
	I1002 21:19:23.689437  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetIP
	I1002 21:19:23.693506  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.693990  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:23.694024  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.694265  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:23.697108  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.697883  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:23.697902  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.698062  537395 provision.go:143] copyHostCerts
	I1002 21:19:23.698142  537395 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem, removing ...
	I1002 21:19:23.698169  537395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem
	I1002 21:19:23.698242  537395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/ca.pem (1078 bytes)
	I1002 21:19:23.698421  537395 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem, removing ...
	I1002 21:19:23.698429  537395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem
	I1002 21:19:23.698473  537395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/cert.pem (1123 bytes)
	I1002 21:19:23.698562  537395 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem, removing ...
	I1002 21:19:23.698568  537395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem
	I1002 21:19:23.698603  537395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-492630/.minikube/key.pem (1675 bytes)
	I1002 21:19:23.698686  537395 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-852898 san=[127.0.0.1 192.168.50.109 cert-expiration-852898 localhost minikube]
	I1002 21:19:23.822018  537395 provision.go:177] copyRemoteCerts
	I1002 21:19:23.822107  537395 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:19:23.822152  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:23.825793  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.826291  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:23.826330  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:23.826722  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:23.826921  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:23.827102  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:23.827287  537395 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/cert-expiration-852898/id_rsa Username:docker}
	I1002 21:19:23.924112  537395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:19:23.962497  537395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:19:24.000516  537395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 21:19:24.034012  537395 provision.go:87] duration metric: took 344.961625ms to configureAuth
	I1002 21:19:24.034038  537395 buildroot.go:189] setting minikube options for container-runtime
	I1002 21:19:24.034312  537395 config.go:182] Loaded profile config "cert-expiration-852898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:19:24.034431  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:24.037699  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:24.038225  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:24.038244  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:24.038506  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:24.038719  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:24.038883  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:24.039017  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:24.039201  537395 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:24.039412  537395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1002 21:19:24.039426  537395 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:19:26.815265  537126 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.855752757s)
	I1002 21:19:26.815311  537126 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1002 21:19:26.815320  537126 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.855784737s)
	I1002 21:19:26.815351  537126 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:19:26.815405  537126 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:19:26.815406  537126 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 21:19:26.861822  537126 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 21:19:26.861927  537126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 21:19:29.108163  537126 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.292667911s)
	I1002 21:19:29.108209  537126 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1002 21:19:29.108230  537126 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.246275277s)
	I1002 21:19:29.108278  537126 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1002 21:19:29.108316  537126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
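The image loader's cache protocol is visible in the lines above: stat the remote tarball first, treat "No such file or directory" (exit status 1) as a cache miss, then copy the cached archive across and hand it to podman. A minimal sketch of that check-then-copy pattern, with paths from the log ($VM is a hypothetical ssh target standing in for minikube's internal ssh_runner):

	IMG=/var/lib/minikube/images/storage-provisioner_v5
	CACHE=$HOME/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	ssh "$VM" "stat -c '%s %y' $IMG" 2>/dev/null \
		|| { scp "$CACHE" "$VM:$IMG" && ssh "$VM" "sudo podman load -i $IMG"; }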
	I1002 21:19:29.108239  537126 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:19:29.108442  537126 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 21:19:28.384116  536818 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:28.384603  536818 main.go:141] libmachine: Using API Version  1
	I1002 21:19:28.384628  536818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:28.385019  536818 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:28.385317  536818 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:19:28.385335  536818 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:19:28.385355  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHHostname
	I1002 21:19:28.385732  536818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:19:28.385789  536818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:19:28.389739  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | domain old-k8s-version-166937 has defined MAC address 52:54:00:6f:36:e8 in network mk-old-k8s-version-166937
	I1002 21:19:28.390472  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:36:e8", ip: ""} in network mk-old-k8s-version-166937: {Iface:virbr4 ExpiryTime:2025-10-02 22:18:53 +0000 UTC Type:0 Mac:52:54:00:6f:36:e8 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:old-k8s-version-166937 Clientid:01:52:54:00:6f:36:e8}
	I1002 21:19:28.390510  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | domain old-k8s-version-166937 has defined IP address 192.168.72.161 and MAC address 52:54:00:6f:36:e8 in network mk-old-k8s-version-166937
	I1002 21:19:28.390808  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHPort
	I1002 21:19:28.391009  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHKeyPath
	I1002 21:19:28.391225  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHUsername
	I1002 21:19:28.391409  536818 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/old-k8s-version-166937/id_rsa Username:docker}
	I1002 21:19:28.401340  536818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I1002 21:19:28.401887  536818 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:19:28.402361  536818 main.go:141] libmachine: Using API Version  1
	I1002 21:19:28.402382  536818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:19:28.402940  536818 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:19:28.403162  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetState
	I1002 21:19:28.405220  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .DriverName
	I1002 21:19:28.405405  536818 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:19:28.405415  536818 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:19:28.405433  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHHostname
	I1002 21:19:28.409491  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | domain old-k8s-version-166937 has defined MAC address 52:54:00:6f:36:e8 in network mk-old-k8s-version-166937
	I1002 21:19:28.410095  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:36:e8", ip: ""} in network mk-old-k8s-version-166937: {Iface:virbr4 ExpiryTime:2025-10-02 22:18:53 +0000 UTC Type:0 Mac:52:54:00:6f:36:e8 Iaid: IPaddr:192.168.72.161 Prefix:24 Hostname:old-k8s-version-166937 Clientid:01:52:54:00:6f:36:e8}
	I1002 21:19:28.410211  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | domain old-k8s-version-166937 has defined IP address 192.168.72.161 and MAC address 52:54:00:6f:36:e8 in network mk-old-k8s-version-166937
	I1002 21:19:28.410519  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHPort
	I1002 21:19:28.410715  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHKeyPath
	I1002 21:19:28.410930  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .GetSSHUsername
	I1002 21:19:28.411078  536818 sshutil.go:53] new ssh client: &{IP:192.168.72.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/old-k8s-version-166937/id_rsa Username:docker}
	I1002 21:19:28.798036  536818 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:19:28.798098  536818 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:19:28.903555  536818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:19:28.943493  536818 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:19:30.253613  536818 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.455538744s)
	I1002 21:19:30.254993  536818 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-166937" to be "Ready" ...
	I1002 21:19:30.255011  536818 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.456868393s)
	I1002 21:19:30.255046  536818 start.go:976] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
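The one-liner that just completed injects a `hosts` block into CoreDNS's Corefile so pods resolve host.minikube.internal to the host-side gateway (192.168.72.1). Reconstructed from the two sed expressions (which add `log` before `errors` and the `hosts` block before the `forward` line), the relevant stanza of the replaced ConfigMap looks like:

	    log
	    errors
	    hosts {
	       192.168.72.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf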
	I1002 21:19:30.267488  536818 node_ready.go:49] node "old-k8s-version-166937" is "Ready"
	I1002 21:19:30.267523  536818 node_ready.go:38] duration metric: took 12.492287ms for node "old-k8s-version-166937" to be "Ready" ...
	I1002 21:19:30.267541  536818 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:19:30.267587  536818 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:19:30.553475  536818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.649877224s)
	I1002 21:19:30.553549  536818 main.go:141] libmachine: Making call to close driver server
	I1002 21:19:30.553568  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .Close
	I1002 21:19:30.553581  536818 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.610045232s)
	I1002 21:19:30.553631  536818 main.go:141] libmachine: Making call to close driver server
	I1002 21:19:30.553644  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .Close
	I1002 21:19:30.553644  536818 api_server.go:72] duration metric: took 2.210990425s to wait for apiserver process to appear ...
	I1002 21:19:30.553656  536818 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:19:30.553676  536818 api_server.go:253] Checking apiserver healthz at https://192.168.72.161:8443/healthz ...
	I1002 21:19:30.554084  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | Closing plugin on server side
	I1002 21:19:30.554095  536818 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:19:30.554105  536818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:19:30.554114  536818 main.go:141] libmachine: Making call to close driver server
	I1002 21:19:30.554118  536818 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:19:30.554122  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .Close
	I1002 21:19:30.554126  536818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:19:30.554134  536818 main.go:141] libmachine: Making call to close driver server
	I1002 21:19:30.554140  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .Close
	I1002 21:19:30.554878  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | Closing plugin on server side
	I1002 21:19:30.554914  536818 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:19:30.554923  536818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:19:30.556401  536818 main.go:141] libmachine: (old-k8s-version-166937) DBG | Closing plugin on server side
	I1002 21:19:30.556431  536818 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:19:30.556458  536818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:19:30.574896  536818 api_server.go:279] https://192.168.72.161:8443/healthz returned 200:
	ok
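The healthz probe is a plain HTTPS GET against the apiserver. A manual equivalent (illustrative; -k skips certificate verification, and anonymous access to /healthz relies on the default RBAC grant to unauthenticated users, whereas minikube's own probe uses the cluster client credentials):

	curl -k https://192.168.72.161:8443/healthz   # prints "ok" on success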
	I1002 21:19:30.577473  536818 api_server.go:141] control plane version: v1.28.0
	I1002 21:19:30.577510  536818 api_server.go:131] duration metric: took 23.846211ms to wait for apiserver health ...
	I1002 21:19:30.577525  536818 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:19:30.588650  536818 system_pods.go:59] 8 kube-system pods found
	I1002 21:19:30.588700  536818 system_pods.go:61] "coredns-5dd5756b68-5qhs2" [5086fc96-9b6e-4366-9937-24c5e31ae92a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:19:30.588731  536818 system_pods.go:61] "coredns-5dd5756b68-w98m8" [0f8fe28c-b711-4fb8-b73b-0f2436fce3d2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:19:30.588739  536818 system_pods.go:61] "etcd-old-k8s-version-166937" [1ab1cdf6-6dbf-4a84-aca7-24362d20d884] Running
	I1002 21:19:30.588745  536818 system_pods.go:61] "kube-apiserver-old-k8s-version-166937" [71ef250f-f24f-4024-b827-709d5ee93c04] Running
	I1002 21:19:30.588756  536818 system_pods.go:61] "kube-controller-manager-old-k8s-version-166937" [209ef424-d286-4e27-968b-47d1087f2ed0] Running
	I1002 21:19:30.588761  536818 system_pods.go:61] "kube-proxy-jfzjn" [717dd4c7-ad94-41a1-986a-e258ce392fb8] Running
	I1002 21:19:30.588766  536818 system_pods.go:61] "kube-scheduler-old-k8s-version-166937" [f144da5e-0220-4614-a583-3c0b1a627588] Running
	I1002 21:19:30.588770  536818 system_pods.go:61] "storage-provisioner" [3062a72b-afba-41c9-a28b-688325336b9c] Pending
	I1002 21:19:30.588778  536818 system_pods.go:74] duration metric: took 11.245795ms to wait for pod list to return data ...
	I1002 21:19:30.588789  536818 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:19:30.591210  536818 main.go:141] libmachine: Making call to close driver server
	I1002 21:19:30.591233  536818 main.go:141] libmachine: (old-k8s-version-166937) Calling .Close
	I1002 21:19:30.591635  536818 main.go:141] libmachine: Successfully made call to close driver server
	I1002 21:19:30.591651  536818 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 21:19:30.592552  536818 default_sa.go:45] found service account: "default"
	I1002 21:19:30.592573  536818 default_sa.go:55] duration metric: took 3.777263ms for default service account to be created ...
	I1002 21:19:30.592584  536818 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:19:30.594164  536818 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:19:32.451786  537164 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.367383323s)
	I1002 21:19:32.451827  537164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:19:32.451890  537164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:19:32.459083  537164 start.go:563] Will wait 60s for crictl version
	I1002 21:19:32.459159  537164 ssh_runner.go:195] Run: which crictl
	I1002 21:19:32.463509  537164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:19:32.500119  537164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 21:19:32.500215  537164 ssh_runner.go:195] Run: crio --version
	I1002 21:19:32.531499  537164 ssh_runner.go:195] Run: crio --version
	I1002 21:19:32.563525  537164 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 21:19:30.594906  536818 addons.go:514] duration metric: took 2.252122052s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 21:19:30.597124  536818 system_pods.go:86] 8 kube-system pods found
	I1002 21:19:30.597158  536818 system_pods.go:89] "coredns-5dd5756b68-5qhs2" [5086fc96-9b6e-4366-9937-24c5e31ae92a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:19:30.597171  536818 system_pods.go:89] "coredns-5dd5756b68-w98m8" [0f8fe28c-b711-4fb8-b73b-0f2436fce3d2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:19:30.597180  536818 system_pods.go:89] "etcd-old-k8s-version-166937" [1ab1cdf6-6dbf-4a84-aca7-24362d20d884] Running
	I1002 21:19:30.597203  536818 system_pods.go:89] "kube-apiserver-old-k8s-version-166937" [71ef250f-f24f-4024-b827-709d5ee93c04] Running
	I1002 21:19:30.597215  536818 system_pods.go:89] "kube-controller-manager-old-k8s-version-166937" [209ef424-d286-4e27-968b-47d1087f2ed0] Running
	I1002 21:19:30.597222  536818 system_pods.go:89] "kube-proxy-jfzjn" [717dd4c7-ad94-41a1-986a-e258ce392fb8] Running
	I1002 21:19:30.597229  536818 system_pods.go:89] "kube-scheduler-old-k8s-version-166937" [f144da5e-0220-4614-a583-3c0b1a627588] Running
	I1002 21:19:30.597236  536818 system_pods.go:89] "storage-provisioner" [3062a72b-afba-41c9-a28b-688325336b9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:19:30.597249  536818 system_pods.go:126] duration metric: took 4.658315ms to wait for k8s-apps to be running ...
	I1002 21:19:30.597259  536818 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:19:30.597314  536818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:19:30.618017  536818 system_svc.go:56] duration metric: took 20.749451ms WaitForService to wait for kubelet
	I1002 21:19:30.618047  536818 kubeadm.go:586] duration metric: took 2.275394411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:19:30.618072  536818 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:19:30.620856  536818 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 21:19:30.620891  536818 node_conditions.go:123] node cpu capacity is 2
	I1002 21:19:30.620907  536818 node_conditions.go:105] duration metric: took 2.829757ms to run NodePressure ...
	I1002 21:19:30.620923  536818 start.go:241] waiting for startup goroutines ...
	I1002 21:19:30.759801  536818 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-166937" context rescaled to 1 replicas
	I1002 21:19:30.759855  536818 start.go:246] waiting for cluster config update ...
	I1002 21:19:30.759875  536818 start.go:255] writing updated cluster config ...
	I1002 21:19:30.760254  536818 ssh_runner.go:195] Run: rm -f paused
	I1002 21:19:30.767397  536818 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:19:30.773515  536818 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5qhs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:30.780943  536818 pod_ready.go:94] pod "coredns-5dd5756b68-5qhs2" is "Ready"
	I1002 21:19:30.780979  536818 pod_ready.go:86] duration metric: took 7.438034ms for pod "coredns-5dd5756b68-5qhs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:30.780994  536818 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-w98m8" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:30.787305  536818 pod_ready.go:94] pod "coredns-5dd5756b68-w98m8" is "Ready"
	I1002 21:19:30.787333  536818 pod_ready.go:86] duration metric: took 6.32809ms for pod "coredns-5dd5756b68-w98m8" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:30.790525  536818 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:30.796419  536818 pod_ready.go:94] pod "etcd-old-k8s-version-166937" is "Ready"
	I1002 21:19:30.796443  536818 pod_ready.go:86] duration metric: took 5.895144ms for pod "etcd-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:30.800816  536818 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:30.981494  536818 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-166937" is "Ready"
	I1002 21:19:30.981537  536818 pod_ready.go:86] duration metric: took 180.694771ms for pod "kube-apiserver-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:31.174898  536818 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:31.572654  536818 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-166937" is "Ready"
	I1002 21:19:31.572684  536818 pod_ready.go:86] duration metric: took 397.753796ms for pod "kube-controller-manager-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:31.772415  536818 pod_ready.go:83] waiting for pod "kube-proxy-jfzjn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:32.173053  536818 pod_ready.go:94] pod "kube-proxy-jfzjn" is "Ready"
	I1002 21:19:32.173090  536818 pod_ready.go:86] duration metric: took 400.642798ms for pod "kube-proxy-jfzjn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:32.374170  536818 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:32.773140  536818 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-166937" is "Ready"
	I1002 21:19:32.773189  536818 pod_ready.go:86] duration metric: took 398.983774ms for pod "kube-scheduler-old-k8s-version-166937" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:19:32.773217  536818 pod_ready.go:40] duration metric: took 2.005770396s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:19:32.821821  536818 start.go:623] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1002 21:19:32.822950  536818 out.go:203] 
	W1002 21:19:32.823929  536818 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 21:19:32.824814  536818 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 21:19:32.825993  536818 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-166937" cluster and "default" namespace by default
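The hinted command runs a kubectl binary matched to the cluster's Kubernetes version (downloaded on first use); arguments after the -- separator are passed through unchanged. For example (profile name taken from the log above):

	minikube -p old-k8s-version-166937 kubectl -- get pods -A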
	I1002 21:19:29.676096  537395 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:19:29.676115  537395 machine.go:96] duration metric: took 6.399306682s to provisionDockerMachine
	I1002 21:19:29.676127  537395 start.go:293] postStartSetup for "cert-expiration-852898" (driver="kvm2")
	I1002 21:19:29.676140  537395 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:19:29.676161  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:19:29.676542  537395 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:19:29.676583  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:29.680229  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.680722  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:29.680743  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.681011  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:29.681230  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:29.681390  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:29.681512  537395 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/cert-expiration-852898/id_rsa Username:docker}
	I1002 21:19:29.777698  537395 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:19:29.782845  537395 info.go:137] Remote host: Buildroot 2025.02
	I1002 21:19:29.782866  537395 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/addons for local assets ...
	I1002 21:19:29.782928  537395 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-492630/.minikube/files for local assets ...
	I1002 21:19:29.783014  537395 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem -> 4975692.pem in /etc/ssl/certs
	I1002 21:19:29.783142  537395 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:19:29.796101  537395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:19:29.825695  537395 start.go:296] duration metric: took 149.551435ms for postStartSetup
	I1002 21:19:29.825752  537395 fix.go:56] duration metric: took 6.574128325s for fixHost
	I1002 21:19:29.825779  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:29.829445  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.830007  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:29.830035  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.830253  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:29.830420  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:29.830585  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:29.830695  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:29.830873  537395 main.go:141] libmachine: Using SSH client type: native
	I1002 21:19:29.831077  537395 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1002 21:19:29.831082  537395 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 21:19:29.947614  537395 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759439969.943427398
	
	I1002 21:19:29.947632  537395 fix.go:216] guest clock: 1759439969.943427398
	I1002 21:19:29.947641  537395 fix.go:229] Guest: 2025-10-02 21:19:29.943427398 +0000 UTC Remote: 2025-10-02 21:19:29.825756023 +0000 UTC m=+31.406728115 (delta=117.671375ms)
	I1002 21:19:29.947683  537395 fix.go:200] guest clock delta is within tolerance: 117.671375ms
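The reported delta is simply the guest timestamp from date +%s.%N minus the host-side wall clock captured at the same moment. Reproducing the arithmetic with the values from the log (a sketch; assumes bc is installed):

	guest=1759439969.943427398    # `date +%s.%N` output on the VM
	remote=1759439969.825756023   # host clock at the comparison point
	echo "$guest - $remote" | bc  # 0.117671375 s = 117.671375 ms, within tolerance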
	I1002 21:19:29.947689  537395 start.go:83] releasing machines lock for "cert-expiration-852898", held for 6.696100692s
	I1002 21:19:29.947734  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:19:29.948063  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetIP
	I1002 21:19:29.951519  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.952010  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:29.952047  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.952254  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:19:29.952980  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:19:29.953200  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .DriverName
	I1002 21:19:29.953323  537395 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:19:29.953370  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:29.953416  537395 ssh_runner.go:195] Run: cat /version.json
	I1002 21:19:29.953436  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHHostname
	I1002 21:19:29.957814  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.958312  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:29.958329  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.958352  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.958690  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:29.958874  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:29.959036  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:29.959192  537395 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/cert-expiration-852898/id_rsa Username:docker}
	I1002 21:19:29.959276  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:f9:15", ip: ""} in network mk-cert-expiration-852898: {Iface:virbr2 ExpiryTime:2025-10-02 22:15:34 +0000 UTC Type:0 Mac:52:54:00:c5:f9:15 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:cert-expiration-852898 Clientid:01:52:54:00:c5:f9:15}
	I1002 21:19:29.959289  537395 main.go:141] libmachine: (cert-expiration-852898) DBG | domain cert-expiration-852898 has defined IP address 192.168.50.109 and MAC address 52:54:00:c5:f9:15 in network mk-cert-expiration-852898
	I1002 21:19:29.959648  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHPort
	I1002 21:19:29.959848  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHKeyPath
	I1002 21:19:29.960008  537395 main.go:141] libmachine: (cert-expiration-852898) Calling .GetSSHUsername
	I1002 21:19:29.960136  537395 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/cert-expiration-852898/id_rsa Username:docker}
	I1002 21:19:30.046809  537395 ssh_runner.go:195] Run: systemctl --version
	I1002 21:19:30.076618  537395 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:19:30.240510  537395 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:19:30.250748  537395 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:19:30.250837  537395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:19:30.267880  537395 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:19:30.267895  537395 start.go:495] detecting cgroup driver to use...
	I1002 21:19:30.267970  537395 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:19:30.299351  537395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:19:30.317507  537395 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:19:30.317572  537395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:19:30.339251  537395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:19:30.362595  537395 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:19:30.545732  537395 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:19:30.753428  537395 docker.go:234] disabling docker service ...
	I1002 21:19:30.753501  537395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:19:30.793469  537395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:19:30.810532  537395 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:19:31.074597  537395 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:19:31.266047  537395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:19:31.283422  537395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:19:31.308447  537395 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:19:31.308508  537395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:31.321750  537395 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:19:31.321810  537395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:31.334359  537395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:31.346359  537395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:31.358601  537395 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:19:31.372655  537395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:31.385897  537395 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:19:31.399451  537395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
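Taken together, those sed edits leave a drop-in roughly like the following (a sketch; the section headers and field order are assumptions about the ISO's stock 02-crio.conf):

	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# [crio.image]
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# [crio.runtime]
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]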
	I1002 21:19:31.412115  537395 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:19:31.423122  537395 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:19:31.434695  537395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:31.615344  537395 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:19:31.518527  537126 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.410050473s)
	I1002 21:19:31.518559  537126 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-492630/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1002 21:19:31.518603  537126 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1002 21:19:31.518654  537126 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1002 21:19:32.564421  537164 main.go:141] libmachine: (pause-128856) Calling .GetIP
	I1002 21:19:32.567331  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:32.567864  537164 main.go:141] libmachine: (pause-128856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:57:d4", ip: ""} in network mk-pause-128856: {Iface:virbr3 ExpiryTime:2025-10-02 22:17:40 +0000 UTC Type:0 Mac:52:54:00:3e:57:d4 Iaid: IPaddr:192.168.39.39 Prefix:24 Hostname:pause-128856 Clientid:01:52:54:00:3e:57:d4}
	I1002 21:19:32.567896  537164 main.go:141] libmachine: (pause-128856) DBG | domain pause-128856 has defined IP address 192.168.39.39 and MAC address 52:54:00:3e:57:d4 in network mk-pause-128856
	I1002 21:19:32.568212  537164 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 21:19:32.573154  537164 kubeadm.go:883] updating cluster {Name:pause-128856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:19:32.573385  537164 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:19:32.573457  537164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:19:32.617677  537164 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:19:32.617725  537164 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:19:32.617801  537164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:19:32.654217  537164 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:19:32.654249  537164 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:19:32.654259  537164 kubeadm.go:934] updating node { 192.168.39.39 8443 v1.34.1 crio true true} ...
	I1002 21:19:32.654391  537164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-128856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
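The empty ExecStart= line in the generated drop-in is the standard systemd idiom: the first assignment clears the ExecStart inherited from the base unit so the second fully replaces it rather than appending. To inspect the merged result on the guest (illustrative):

	sudo systemctl cat kubelet          # shows the base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload && sudo systemctl restart kubelet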
	I1002 21:19:32.654530  537164 ssh_runner.go:195] Run: crio config
	I1002 21:19:32.704437  537164 cni.go:84] Creating CNI manager for ""
	I1002 21:19:32.704471  537164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:19:32.704493  537164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:19:32.704523  537164 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.39 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-128856 NodeName:pause-128856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:19:32.704693  537164 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-128856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.39"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.39"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:19:32.704794  537164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:19:32.719385  537164 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:19:32.719478  537164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:19:32.732745  537164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1002 21:19:32.753827  537164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:19:32.776821  537164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1002 21:19:32.802949  537164 ssh_runner.go:195] Run: grep 192.168.39.39	control-plane.minikube.internal$ /etc/hosts
	I1002 21:19:32.808080  537164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:19:33.025419  537164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:19:33.052792  537164 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856 for IP: 192.168.39.39
	I1002 21:19:33.052822  537164 certs.go:195] generating shared ca certs ...
	I1002 21:19:33.052853  537164 certs.go:227] acquiring lock for ca certs: {Name:mk99bb18e623cf4cf4a4efda3dab88668aa481a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:33.053073  537164 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key
	I1002 21:19:33.053136  537164 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key
	I1002 21:19:33.053148  537164 certs.go:257] generating profile certs ...
	I1002 21:19:33.053289  537164 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/client.key
	I1002 21:19:33.053374  537164 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/apiserver.key.33b8e485
	I1002 21:19:33.053438  537164 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/proxy-client.key
	I1002 21:19:33.053555  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem (1338 bytes)
	W1002 21:19:33.053582  537164 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569_empty.pem, impossibly tiny 0 bytes
	I1002 21:19:33.053590  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:19:33.053666  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:19:33.053718  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:19:33.053754  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/certs/key.pem (1675 bytes)
	I1002 21:19:33.053813  537164 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem (1708 bytes)
	I1002 21:19:33.054904  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:19:33.086355  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:19:33.119310  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:19:33.156405  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:19:33.190404  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:19:33.226635  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:19:33.267389  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:19:33.303795  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/pause-128856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:19:33.348342  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/ssl/certs/4975692.pem --> /usr/share/ca-certificates/4975692.pem (1708 bytes)
	I1002 21:19:33.403584  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:19:33.452816  537164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-492630/.minikube/certs/497569.pem --> /usr/share/ca-certificates/497569.pem (1338 bytes)
	I1002 21:19:33.492692  537164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:19:33.524393  537164 ssh_runner.go:195] Run: openssl version
	I1002 21:19:33.533803  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4975692.pem && ln -fs /usr/share/ca-certificates/4975692.pem /etc/ssl/certs/4975692.pem"
	I1002 21:19:33.554059  537164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4975692.pem
	I1002 21:19:33.562341  537164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:27 /usr/share/ca-certificates/4975692.pem
	I1002 21:19:33.562459  537164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4975692.pem
	I1002 21:19:33.572092  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4975692.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:19:33.588683  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:19:33.605965  537164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:33.613256  537164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:19 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:33.613339  537164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:19:33.623263  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:19:33.638525  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/497569.pem && ln -fs /usr/share/ca-certificates/497569.pem /etc/ssl/certs/497569.pem"
	I1002 21:19:33.655792  537164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/497569.pem
	I1002 21:19:33.662768  537164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:27 /usr/share/ca-certificates/497569.pem
	I1002 21:19:33.662833  537164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/497569.pem
	I1002 21:19:33.671727  537164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/497569.pem /etc/ssl/certs/51391683.0"
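The hash-named symlinks follow OpenSSL's subject-hash lookup convention: OpenSSL locates a CA in /etc/ssl/certs via a link named <subject-hash>.0. Recreating one link by hand (the b5213941 value matches the log above):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"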
	I1002 21:19:33.683405  537164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:19:33.689075  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:19:33.699191  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:19:33.708429  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:19:33.715982  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:19:33.725718  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:19:33.732883  537164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
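Each -checkend 86400 call exits non-zero if the certificate expires within the next 86400 seconds (24 h), which is how the start path decides whether certs need regenerating. Standalone form (illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"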
	I1002 21:19:33.742863  537164 kubeadm.go:400] StartCluster: {Name:pause-128856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-128856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.39 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:19:33.743025  537164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:19:33.743110  537164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:19:33.794920  537164 cri.go:89] found id: "0571b181224eed714f4f038aec0278668547fcc7a7a6bf02fd4fdbed33a4efde"
	I1002 21:19:33.794946  537164 cri.go:89] found id: "9d31f80733a3cc5ddd857f757c41cd2aa67b47084e1f991bfa8ea3b998fc0799"
	I1002 21:19:33.794953  537164 cri.go:89] found id: "11c83cdfc01723ef7e45b3510f1e200c5a4ab1167826f9a1c2fbc3b463993059"
	I1002 21:19:33.794957  537164 cri.go:89] found id: "4f4612b1df9269b91b676f8fbba243c1bbedff79a13ef12670c064228daf6327"
	I1002 21:19:33.794961  537164 cri.go:89] found id: "05387411e6ed3c96e79e0122ad74634891c4e42e18758ffb12ade2efa81ea15d"
	I1002 21:19:33.794966  537164 cri.go:89] found id: "11567d5c6ef86dfb46f79bbc6ffabddf97b216eb14a9f66cf90db5331ce637ed"
	I1002 21:19:33.794984  537164 cri.go:89] found id: ""
	I1002 21:19:33.795034  537164 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-128856 -n pause-128856
helpers_test.go:269: (dbg) Run:  kubectl --context pause-128856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-128856 -n pause-128856
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-128856 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-128856 logs -n 25: (1.548104736s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                  ARGS                                                                                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-664739                                                                                                                                                                                                                                                  │ cert-options-664739       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ ssh     │ -p NoKubernetes-685644 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ stop    │ -p NoKubernetes-685644                                                                                                                                                                                                                                                  │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ start   │ -p NoKubernetes-685644 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                              │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:17 UTC │
	│ start   │ -p stopped-upgrade-391687 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                          │ stopped-upgrade-391687    │ jenkins │ v1.32.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:17 UTC │
	│ ssh     │ -p NoKubernetes-685644 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                                                 │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │                     │
	│ delete  │ -p NoKubernetes-685644                                                                                                                                                                                                                                                  │ NoKubernetes-685644       │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:17 UTC │
	│ stop    │ -p kubernetes-upgrade-238376                                                                                                                                                                                                                                            │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:17 UTC │
	│ start   │ -p pause-128856 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ pause-128856              │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                      │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:18 UTC │
	│ stop    │ stopped-upgrade-391687 stop                                                                                                                                                                                                                                             │ stopped-upgrade-391687    │ jenkins │ v1.32.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:17 UTC │
	│ start   │ -p stopped-upgrade-391687 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                      │ stopped-upgrade-391687    │ jenkins │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                             │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │                     │
	│ start   │ -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                      │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:18 UTC │
	│ delete  │ -p kubernetes-upgrade-238376                                                                                                                                                                                                                                            │ kubernetes-upgrade-238376 │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p old-k8s-version-166937 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0 │ old-k8s-version-166937    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:19 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-391687 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                                             │ stopped-upgrade-391687    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │                     │
	│ delete  │ -p stopped-upgrade-391687                                                                                                                                                                                                                                               │ stopped-upgrade-391687    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:18 UTC │
	│ start   │ -p no-preload-397715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                       │ no-preload-397715         │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │                     │
	│ start   │ -p pause-128856 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                              │ pause-128856              │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:19 UTC │
	│ start   │ -p cert-expiration-852898 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                     │ cert-expiration-852898    │ jenkins │ v1.37.0 │ 02 Oct 25 21:18 UTC │ 02 Oct 25 21:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-166937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                            │ old-k8s-version-166937    │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │ 02 Oct 25 21:19 UTC │
	│ stop    │ -p old-k8s-version-166937 --alsologtostderr -v=3                                                                                                                                                                                                                        │ old-k8s-version-166937    │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ delete  │ -p cert-expiration-852898                                                                                                                                                                                                                                               │ cert-expiration-852898    │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │ 02 Oct 25 21:19 UTC │
	│ start   │ -p embed-certs-296193 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1                                                                                        │ embed-certs-296193        │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:19:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
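
The header above documents the klog-style record layout used for the rest of this section. For readers post-processing this report, here is a minimal Go sketch that splits one such line into its fields; the regular expression is an assumption derived only from the format string [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg, not taken from minikube's sources.

package main

import (
	"fmt"
	"regexp"
)

// Pattern assumed from the documented layout:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I1002 21:19:59.698616  538091 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("level=%s month=%s day=%s time=%s thread=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}
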
	I1002 21:19:59.698616  538091 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:19:59.698970  538091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:19:59.698983  538091 out.go:374] Setting ErrFile to fd 2...
	I1002 21:19:59.698990  538091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:19:59.699282  538091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 21:19:59.700402  538091 out.go:368] Setting JSON to false
	I1002 21:19:59.701619  538091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7335,"bootTime":1759432665,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
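
The hostinfo payload in the line above is plain JSON, so it can be decoded directly. The struct below is an illustrative reconstruction assumed from the visible fields only; it is not minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
)

// HostInfo mirrors the fields visible in the hostinfo log line above.
type HostInfo struct {
	Hostname             string `json:"hostname"`
	Uptime               uint64 `json:"uptime"`
	BootTime             uint64 `json:"bootTime"`
	Procs                uint64 `json:"procs"`
	OS                   string `json:"os"`
	Platform             string `json:"platform"`
	PlatformFamily       string `json:"platformFamily"`
	PlatformVersion      string `json:"platformVersion"`
	KernelVersion        string `json:"kernelVersion"`
	KernelArch           string `json:"kernelArch"`
	VirtualizationSystem string `json:"virtualizationSystem"`
	VirtualizationRole   string `json:"virtualizationRole"`
	HostID               string `json:"hostId"`
}

func main() {
	raw := `{"hostname":"ubuntu-20-agent-10","uptime":7335,"bootTime":1759432665,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}`
	var h HostInfo
	if err := json.Unmarshal([]byte(raw), &h); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("%s runs %s %s on %s (%s/%s)\n",
		h.Hostname, h.Platform, h.PlatformVersion, h.KernelVersion,
		h.VirtualizationSystem, h.VirtualizationRole)
}
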
	I1002 21:19:59.701733  538091 start.go:140] virtualization: kvm guest
	I1002 21:19:59.704410  538091 out.go:179] * [embed-certs-296193] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:19:59.705718  538091 notify.go:220] Checking for updates...
	I1002 21:19:59.705747  538091 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:19:59.706770  538091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:19:59.707889  538091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:19:59.708873  538091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 21:19:59.710002  538091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:19:59.710970  538091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:19:59.712429  538091 config.go:182] Loaded profile config "no-preload-397715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:19:59.712580  538091 config.go:182] Loaded profile config "old-k8s-version-166937": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 21:19:59.712756  538091 config.go:182] Loaded profile config "pause-128856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:19:59.712866  538091 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:19:59.751240  538091 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 21:19:59.752142  538091 start.go:304] selected driver: kvm2
	I1002 21:19:59.752161  538091 start.go:924] validating driver "kvm2" against <nil>
	I1002 21:19:59.752180  538091 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
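
The status line above prints the driver health structure that gates the rest of the start. Below is a sketch of an equivalent Go type, reconstructed only from the fields shown in that line; the field types and the usable() helper are assumptions made for illustration.

package main

import "fmt"

// DriverStatus is an illustrative reconstruction of the state printed in the
// "status for kvm2" line above. The field names come from the log; the types
// are assumed, not minikube's definitions.
type DriverStatus struct {
	Installed        bool
	Healthy          bool
	Running          bool
	NeedsImprovement bool
	Error            error
	Reason           string
	Fix              string
	Doc              string
	Version          string
}

// usable reflects the check the log implies: the start proceeds because the
// driver reports installed and healthy with no error.
func (s DriverStatus) usable() bool {
	return s.Installed && s.Healthy && s.Error == nil
}

func main() {
	s := DriverStatus{Installed: true, Healthy: true, Running: true}
	fmt.Println("kvm2 usable:", s.usable())
}
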
	I1002 21:19:59.753133  538091 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:19:59.753244  538091 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:19:59.768172  538091 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:19:59.768199  538091 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 21:19:59.783828  538091 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 21:19:59.783865  538091 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:19:59.784155  538091 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:19:59.784188  538091 cni.go:84] Creating CNI manager for ""
	I1002 21:19:59.784238  538091 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 21:19:59.784250  538091 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
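
The two cni.go lines above record a decision: the kvm2 driver combined with the crio runtime leads to the bridge CNI and NetworkPlugin=cni. The sketch below restates that rule for illustration only; the function and its cases are assumptions, not minikube's actual selection code.

package main

import "fmt"

// chooseCNI restates the decision logged above. CRI runtimes such as crio
// bring no built-in pod network, so a CNI is required, and bridge is the
// recommendation paired with a VM driver like kvm2 in this log.
func chooseCNI(driver, runtime string) (cni, networkPlugin string) {
	if runtime == "crio" || runtime == "containerd" {
		return "bridge", "cni"
	}
	// Other runtimes are out of scope for this sketch.
	return "", ""
}

func main() {
	cni, plugin := chooseCNI("kvm2", "crio")
	fmt.Printf("recommending %s CNI, NetworkPlugin=%s\n", cni, plugin)
}
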
	I1002 21:19:59.784298  538091 start.go:348] cluster config:
	{Name:embed-certs-296193 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:19:59.784424  538091 iso.go:125] acquiring lock: {Name:mk7586bb79dc7f44da54ee16895643204aac50ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:19:59.785753  538091 out.go:179] * Starting "embed-certs-296193" primary control-plane node in "embed-certs-296193" cluster
	I1002 21:19:59.786969  538091 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:19:59.787010  538091 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:19:59.787021  538091 cache.go:58] Caching tarball of preloaded images
	I1002 21:19:59.787152  538091 preload.go:233] Found /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:19:59.787169  538091 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
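
The preload lines above show the flow: check the local cache for the tarball and skip the download when it is already present. Here is a minimal sketch of that check, assuming only what the log shows; the download step is a placeholder, not minikube's fetch logic.

package main

import (
	"fmt"
	"os"
)

// ensurePreload mirrors the cache check logged above: if the preloaded image
// tarball already exists locally, the download is skipped.
func ensurePreload(tarball string) error {
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found in cache, skipping download:", tarball)
		return nil
	}
	// Placeholder: the real flow would fetch the tarball here.
	return fmt.Errorf("preload %s not cached; download required", tarball)
}

func main() {
	_ = ensurePreload("/home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
}
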
	I1002 21:19:59.787286  538091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/embed-certs-296193/config.json ...
	I1002 21:19:59.787313  538091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/embed-certs-296193/config.json: {Name:mk3c8e4c2c42956527a9549880178a7c7f2e65d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:19:59.787478  538091 start.go:360] acquireMachinesLock for embed-certs-296193: {Name:mk9e7957cdce1fd4b26ce430105927ec465bcae0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 21:19:59.787532  538091 start.go:364] duration metric: took 33.378µs to acquireMachinesLock for "embed-certs-296193"
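
The acquireMachinesLock pair above illustrates the duration-metric pattern used throughout this log: capture time.Now() before taking the lock and report time.Since() once it is held. A generic sketch of the pattern follows, with a plain mutex standing in for minikube's file-backed lock (an assumption for brevity):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var machines sync.Mutex // stand-in for the file-backed machines lock

	start := time.Now()
	machines.Lock()
	defer machines.Unlock()

	// Matches the log's phrasing: "duration metric: took ... to acquireMachinesLock".
	fmt.Printf("duration metric: took %s to acquireMachinesLock for %q\n",
		time.Since(start), "embed-certs-296193")
}
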
	I1002 21:19:59.787558  538091 start.go:93] Provisioning new machine with config: &{Name:embed-certs-296193 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-296193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:19:59.787622  538091 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.190907355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759440000190880592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ca67249-bf6e-484b-bae6-a6b5ea317cc2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.191463475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23d74cd0-7695-401c-aabf-dadbb595d98b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.191536631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23d74cd0-7695-401c-aabf-dadbb595d98b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.191952801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94750222bc14077ef44b61ac08e51cc485d3a2d084a15eadf5b34be24e1e5f98,PodSandboxId:3fd11101f45c00058e3a3d815542e3d2de9e13efeaac2b0b9f23764addb872b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759439983103048577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mvmzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2491b3b8-f01b-4333-8304-06a5dd7afc8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d6e2c5a0d1b7a7f238a790a9e7f52002034351cfd5d9a44b96b62f5acf3b7f,PodSandboxId:35fd09fe0c4c1d8a294c4c546df23ac0d6914b4315080e4ea38ee4eb1ea6d88b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759439982816558028,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxs4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52d68123-da86-4724-899d-ab3a24722a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71db838a4f9acc481f8110356b8ca4103f846bdeff69f6a77419e9cd8fb5b8fb,PodSandboxId:939c3f170d7aef5c5cd50ff2e8065bd80a551382ad7a2bfe704733d4dfc09868,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759439978140581485,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c639b91b2cf59d22360035ec9abdc0c9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ea2f671499f3b49282348893e6ef436005ce83302d44ad9bb75120f294e565,PodSandboxId:b1949d493645c898d129d160d25ef1be68c2e63a9c8789e5b6ffe4ec0714966d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc
97,State:CONTAINER_RUNNING,CreatedAt:1759439978123510628,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8079a1a8223ca8912208dc921ef1c627,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c36437eef0a56439b31247733d8f3c8df89fcd147571f10b181bdc7ac527f5c6,PodSandboxId:70e03229063946bd53a1ef89468b7f3f94a2a1ab6ae7976a9466e33e9c5e4b7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759439978104915530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823edb8d463d6a30dc8bc5fbc81dfbb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aeb7be2b9b11725c458d3dc5c0a3eaf5b3d997a97813ba94d7b3a9e2e525643,PodSandboxId:6371d7f8447c7e396177712f1790631e664b5a893e4f1ef60a49e41126945195,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759439978110425691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991be3b53d43e7883f0dec6de135fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0571b181224eed714f4f038aec0278668547fcc7a7a6bf02fd4fdbed33a4efde,PodSandboxId:65c03197ce0d6f2425645bbc76155503c07c2f4f047b52
0ffa6b622840481817,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759439890369168570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mvmzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2491b3b8-f01b-4333-8304-06a5dd7afc8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d31f80733a3cc5ddd857f757c41cd2aa67b47084e1f991bfa8ea3b998fc0799,PodSandboxId:a2fe7ef013c66cadf5bb056b31ef32035909eee978db7d7d428ea41d0fbb10b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759439889211442811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxs4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52d68123-da86-4724-899d-ab3a24722a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c83cdfc01723ef7e45b3510f1e200c5a4ab1167826f9a1c2fbc3b463993059,PodSandboxId:f012a1d5bb49b2b86d4828d01ffd801006c21de8289da171196bf2497af3a4df,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759439876852857767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991be3b53d43e7883f0dec6de135fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05387411e6ed3c96e79e0122ad74634891c4e42e18758ffb12ade2efa81ea15d,PodSandboxId:d1f73e0d74e57dd24c01e4e013e8684b66840214de39e3e7d57c7e51e122e222,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759439876837229606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c639b91b2cf59d22360035ec9abdc0c9,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4612b1df9269b91b676f8fbba243c1bbedff79a13ef12670c064228daf6327,PodSandboxId:e6f49421847c0e7eae05f126ba0475db3e77262aa9ff057ea87b1a74adc38644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759439876848085456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-128856,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 823edb8d463d6a30dc8bc5fbc81dfbb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11567d5c6ef86dfb46f79bbc6ffabddf97b216eb14a9f66cf90db5331ce637ed,PodSandboxId:6c44d6fb9ac10484d272329fb3143b7473e2f389db361a3a89db137d78709045,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759439876796686729,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8079a1a8223ca8912208dc921ef1c627,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23d74cd0-7695-401c-aabf-dadbb595d98b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.242578563Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76c91f31-6a9d-449e-a300-dcdc5ba01bdb name=/runtime.v1.RuntimeService/Version
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.242648526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76c91f31-6a9d-449e-a300-dcdc5ba01bdb name=/runtime.v1.RuntimeService/Version
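
The Version exchanges in this journal are ordinary CRI polls; crictl version issues the same RPC from the shell. As a hedged illustration, the sketch below calls RuntimeService/Version over CRI-O's default socket using the published CRI API; the socket path and the use of grpc.NewClient (grpc-go 1.63 or newer) are assumptions about the local setup.

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default CRI socket; adjust if your runtime listens elsewhere.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	resp, err := client.Version(context.Background(), &runtimev1.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors the logged response: RuntimeName:cri-o, RuntimeVersion:1.29.1.
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
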
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.243956776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23132baa-0f74-4f11-81e1-3d445690df06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.244742155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759440000244719741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23132baa-0f74-4f11-81e1-3d445690df06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.299648473Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f46052f0-91e3-4bb5-aa9e-b7b3b93eece7 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.299811438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f46052f0-91e3-4bb5-aa9e-b7b3b93eece7 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.301500083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d22b18c-4ee7-479e-bc88-658d0d177855 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.302192594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759440000302159513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d22b18c-4ee7-479e-bc88-658d0d177855 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.303099728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af8fad36-3f9d-417b-a323-1027a4734b40 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.303167375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af8fad36-3f9d-417b-a323-1027a4734b40 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.303976847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94750222bc14077ef44b61ac08e51cc485d3a2d084a15eadf5b34be24e1e5f98,PodSandboxId:3fd11101f45c00058e3a3d815542e3d2de9e13efeaac2b0b9f23764addb872b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759439983103048577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mvmzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2491b3b8-f01b-4333-8304-06a5dd7afc8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d6e2c5a0d1b7a7f238a790a9e7f52002034351cfd5d9a44b96b62f5acf3b7f,PodSandboxId:35fd09fe0c4c1d8a294c4c546df23ac0d6914b4315080e4ea38ee4eb1ea6d88b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759439982816558028,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxs4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52d68123-da86-4724-899d-ab3a24722a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71db838a4f9acc481f8110356b8ca4103f846bdeff69f6a77419e9cd8fb5b8fb,PodSandboxId:939c3f170d7aef5c5cd50ff2e8065bd80a551382ad7a2bfe704733d4dfc09868,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759439978140581485,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c639b91b2cf59d22360035ec9abdc0c9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ea2f671499f3b49282348893e6ef436005ce83302d44ad9bb75120f294e565,PodSandboxId:b1949d493645c898d129d160d25ef1be68c2e63a9c8789e5b6ffe4ec0714966d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc
97,State:CONTAINER_RUNNING,CreatedAt:1759439978123510628,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8079a1a8223ca8912208dc921ef1c627,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c36437eef0a56439b31247733d8f3c8df89fcd147571f10b181bdc7ac527f5c6,PodSandboxId:70e03229063946bd53a1ef89468b7f3f94a2a1ab6ae7976a9466e33e9c5e4b7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759439978104915530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823edb8d463d6a30dc8bc5fbc81dfbb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aeb7be2b9b11725c458d3dc5c0a3eaf5b3d997a97813ba94d7b3a9e2e525643,PodSandboxId:6371d7f8447c7e396177712f1790631e664b5a893e4f1ef60a49e41126945195,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759439978110425691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991be3b53d43e7883f0dec6de135fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0571b181224eed714f4f038aec0278668547fcc7a7a6bf02fd4fdbed33a4efde,PodSandboxId:65c03197ce0d6f2425645bbc76155503c07c2f4f047b52
0ffa6b622840481817,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759439890369168570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mvmzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2491b3b8-f01b-4333-8304-06a5dd7afc8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d31f80733a3cc5ddd857f757c41cd2aa67b47084e1f991bfa8ea3b998fc0799,PodSandboxId:a2fe7ef013c66cadf5bb056b31ef32035909eee978db7d7d428ea41d0fbb10b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759439889211442811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxs4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52d68123-da86-4724-899d-ab3a24722a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c83cdfc01723ef7e45b3510f1e200c5a4ab1167826f9a1c2fbc3b463993059,PodSandboxId:f012a1d5bb49b2b86d4828d01ffd801006c21de8289da171196bf2497af3a4df,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759439876852857767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991be3b53d43e7883f0dec6de135fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05387411e6ed3c96e79e0122ad74634891c4e42e18758ffb12ade2efa81ea15d,PodSandboxId:d1f73e0d74e57dd24c01e4e013e8684b66840214de39e3e7d57c7e51e122e222,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759439876837229606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c639b91b2cf59d22360035ec9abdc0c9,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4612b1df9269b91b676f8fbba243c1bbedff79a13ef12670c064228daf6327,PodSandboxId:e6f49421847c0e7eae05f126ba0475db3e77262aa9ff057ea87b1a74adc38644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759439876848085456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-128856,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 823edb8d463d6a30dc8bc5fbc81dfbb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11567d5c6ef86dfb46f79bbc6ffabddf97b216eb14a9f66cf90db5331ce637ed,PodSandboxId:6c44d6fb9ac10484d272329fb3143b7473e2f389db361a3a89db137d78709045,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759439876796686729,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8079a1a8223ca8912208dc921ef1c627,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af8fad36-3f9d-417b-a323-1027a4734b40 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.349040588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70793a80-6568-418b-ac8c-169c2590e9b1 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.349141782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70793a80-6568-418b-ac8c-169c2590e9b1 name=/runtime.v1.RuntimeService/Version
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.351397998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92ab7d01-9c98-4a30-b309-00fd1d25c9d8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.351768944Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759440000351748838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92ab7d01-9c98-4a30-b309-00fd1d25c9d8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.352605773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=913e20ad-15c8-4778-9ce4-3faeada44136 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.352813398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=913e20ad-15c8-4778-9ce4-3faeada44136 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 21:20:00 pause-128856 crio[2786]: time="2025-10-02 21:20:00.353387177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94750222bc14077ef44b61ac08e51cc485d3a2d084a15eadf5b34be24e1e5f98,PodSandboxId:3fd11101f45c00058e3a3d815542e3d2de9e13efeaac2b0b9f23764addb872b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759439983103048577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mvmzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2491b3b8-f01b-4333-8304-06a5dd7afc8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11d6e2c5a0d1b7a7f238a790a9e7f52002034351cfd5d9a44b96b62f5acf3b7f,PodSandboxId:35fd09fe0c4c1d8a294c4c546df23ac0d6914b4315080e4ea38ee4eb1ea6d88b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759439982816558028,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxs4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52d68123-da86-4724-899d-ab3a24722a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71db838a4f9acc481f8110356b8ca4103f846bdeff69f6a77419e9cd8fb5b8fb,PodSandboxId:939c3f170d7aef5c5cd50ff2e8065bd80a551382ad7a2bfe704733d4dfc09868,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759439978140581485,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c639b91b2cf59d22360035ec9abdc0c9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ea2f671499f3b49282348893e6ef436005ce83302d44ad9bb75120f294e565,PodSandboxId:b1949d493645c898d129d160d25ef1be68c2e63a9c8789e5b6ffe4ec0714966d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc
97,State:CONTAINER_RUNNING,CreatedAt:1759439978123510628,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8079a1a8223ca8912208dc921ef1c627,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c36437eef0a56439b31247733d8f3c8df89fcd147571f10b181bdc7ac527f5c6,PodSandboxId:70e03229063946bd53a1ef89468b7f3f94a2a1ab6ae7976a9466e33e9c5e4b7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759439978104915530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823edb8d463d6a30dc8bc5fbc81dfbb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aeb7be2b9b11725c458d3dc5c0a3eaf5b3d997a97813ba94d7b3a9e2e525643,PodSandboxId:6371d7f8447c7e396177712f1790631e664b5a893e4f1ef60a49e41126945195,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759439978110425691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991be3b53d43e7883f0dec6de135fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0571b181224eed714f4f038aec0278668547fcc7a7a6bf02fd4fdbed33a4efde,PodSandboxId:65c03197ce0d6f2425645bbc76155503c07c2f4f047b52
0ffa6b622840481817,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759439890369168570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mvmzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2491b3b8-f01b-4333-8304-06a5dd7afc8c,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d31f80733a3cc5ddd857f757c41cd2aa67b47084e1f991bfa8ea3b998fc0799,PodSandboxId:a2fe7ef013c66cadf5bb056b31ef32035909eee978db7d7d428ea41d0fbb10b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759439889211442811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jxs4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52d68123-da86-4724-899d-ab3a24722a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c83cdfc01723ef7e45b3510f1e200c5a4ab1167826f9a1c2fbc3b463993059,PodSandboxId:f012a1d5bb49b2b86d4828d01ffd801006c21de8289da171196bf2497af3a4df,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759439876852857767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991be3b53d43e7883f0dec6de135fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05387411e6ed3c96e79e0122ad74634891c4e42e18758ffb12ade2efa81ea15d,PodSandboxId:d1f73e0d74e57dd24c01e4e013e8684b66840214de39e3e7d57c7e51e122e222,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759439876837229606,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c639b91b2cf59d22360035ec9abdc0c9,},Annotations:map[string]string{io.kubernetes.container.hash:
af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4612b1df9269b91b676f8fbba243c1bbedff79a13ef12670c064228daf6327,PodSandboxId:e6f49421847c0e7eae05f126ba0475db3e77262aa9ff057ea87b1a74adc38644,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759439876848085456,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-128856,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 823edb8d463d6a30dc8bc5fbc81dfbb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11567d5c6ef86dfb46f79bbc6ffabddf97b216eb14a9f66cf90db5331ce637ed,PodSandboxId:6c44d6fb9ac10484d272329fb3143b7473e2f389db361a3a89db137d78709045,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759439876796686729,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-128856,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8079a1a8223ca8912208dc921ef1c627,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=913e20ad-15c8-4778-9ce4-3faeada44136 name=/runtime.v1.RuntimeService/ListContainers
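The Version, ImageFsInfo and ListContainers requests above appear to be the kubelet's routine CRI polling of CRI-O. A minimal sketch for pulling the same data by hand over the harness's ssh path (assuming crictl is present on the guest, which these logs imply):

    out/minikube-linux-amd64 -p pause-128856 ssh \
      "sudo crictl version && sudo crictl imagefsinfo && sudo crictl ps -a"

The last command yields essentially the container status listing that follows.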
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	94750222bc140       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago       Running             coredns                   1                   3fd11101f45c0       coredns-66bc5c9577-mvmzc
	11d6e2c5a0d1b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   17 seconds ago       Running             kube-proxy                1                   35fd09fe0c4c1       kube-proxy-jxs4h
	71db838a4f9ac       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   22 seconds ago       Running             kube-scheduler            1                   939c3f170d7ae       kube-scheduler-pause-128856
	d7ea2f671499f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago       Running             kube-apiserver            1                   b1949d493645c       kube-apiserver-pause-128856
	3aeb7be2b9b11       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago       Running             etcd                      1                   6371d7f8447c7       etcd-pause-128856
	c36437eef0a56       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago       Running             kube-controller-manager   1                   70e0322906394       kube-controller-manager-pause-128856
	0571b181224ee       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   65c03197ce0d6       coredns-66bc5c9577-mvmzc
	9d31f80733a3c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   a2fe7ef013c66       kube-proxy-jxs4h
	11c83cdfc0172       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   2 minutes ago        Exited              etcd                      0                   f012a1d5bb49b       etcd-pause-128856
	4f4612b1df926       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   2 minutes ago        Exited              kube-controller-manager   0                   e6f49421847c0       kube-controller-manager-pause-128856
	05387411e6ed3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   2 minutes ago        Exited              kube-scheduler            0                   d1f73e0d74e57       kube-scheduler-pause-128856
	11567d5c6ef86       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   2 minutes ago        Exited              kube-apiserver            0                   6c44d6fb9ac10       kube-apiserver-pause-128856
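Every control-plane container shows an Exited attempt 0 alongside a Running attempt 1, which matches the second start performed by TestPause/serial/SecondStartNoReconfiguration. To drill into one restart by hand, a hedged option (crictl accepts the truncated IDs shown above):

    out/minikube-linux-amd64 -p pause-128856 ssh "sudo crictl inspect 94750222bc140"

Per the ListContainers response above, the io.kubernetes.container.restartCount annotation for this coredns container should read 1.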
	
	
	==> coredns [0571b181224eed714f4f038aec0278668547fcc7a7a6bf02fd4fdbed33a4efde] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52027 - 6693 "HINFO IN 2879463783731746000.9112641622917126522. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025959818s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
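The i/o timeouts against 10.96.0.1:443 show the first coredns losing the in-cluster apiserver VIP while the control plane was being restarted; the SIGTERM lines are its own shutdown. VIP reachability can be re-checked from the guest with the same ssh-plus-curl pattern the suite uses elsewhere (a quick probe, not part of the test itself):

    out/minikube-linux-amd64 -p pause-128856 ssh \
      "curl -sk --max-time 5 https://10.96.0.1:443/version"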
	
	
	==> coredns [94750222bc14077ef44b61ac08e51cc485d3a2d084a15eadf5b34be24e1e5f98] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56089 - 34469 "HINFO IN 5902819435872272640.8911324835023027328. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.040938463s
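The restarted coredns comes up cleanly; the random-name HINFO lookup appears to be its loop-detection self-query at startup. A hedged end-to-end DNS check from a throwaway pod (busybox:1.36 is an assumed image tag; any busybox with nslookup works):

    kubectl --context pause-128856 run dnstest --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local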
	
	
	==> describe nodes <==
	Name:               pause-128856
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-128856
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=pause-128856
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_18_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:17:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-128856
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:19:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:19:41 +0000   Thu, 02 Oct 2025 21:17:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:19:41 +0000   Thu, 02 Oct 2025 21:17:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:19:41 +0000   Thu, 02 Oct 2025 21:17:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:19:41 +0000   Thu, 02 Oct 2025 21:18:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    pause-128856
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e9d5acdd7e54f1aae6d8b6e3b973705
	  System UUID:                6e9d5acd-d7e5-4f1a-ae6d-8b6e3b973705
	  Boot ID:                    44ccb4aa-8eb8-4252-a6eb-bacf8333381d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mvmzc                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     113s
	  kube-system                 etcd-pause-128856                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         118s
	  kube-system                 kube-apiserver-pause-128856             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-128856    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-jxs4h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-pause-128856             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     118s               kubelet          Node pause-128856 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node pause-128856 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node pause-128856 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  NodeReady                117s               kubelet          Node pause-128856 status is now: NodeReady
	  Normal  RegisteredNode           114s               node-controller  Node pause-128856 event: Registered Node pause-128856 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-128856 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-128856 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-128856 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-128856 event: Registered Node pause-128856 in Controller
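The event trail records two kubelet "Starting" cycles (118s and 23s before capture), consistent with the pause-then-second-start sequence under test. To watch just the node conditions without the full describe output, a one-liner such as:

    kubectl --context pause-128856 get node pause-128856 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'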
	
	
	==> dmesg <==
	[Oct 2 21:17] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000033] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007691] (rpcbind)[122]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.177578] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000046] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088816] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100944] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.098716] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 2 21:18] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.773391] kauditd_printk_skb: 18 callbacks suppressed
	[ +33.832150] kauditd_printk_skb: 190 callbacks suppressed
	[Oct 2 21:19] kauditd_printk_skb: 190 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 154 callbacks suppressed
	
	
	==> etcd [11c83cdfc01723ef7e45b3510f1e200c5a4ab1167826f9a1c2fbc3b463993059] <==
	{"level":"warn","ts":"2025-10-02T21:18:10.026963Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.683709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T21:18:10.026989Z","caller":"traceutil/trace.go:172","msg":"trace[862240287] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:359; }","duration":"198.711625ms","start":"2025-10-02T21:18:09.828269Z","end":"2025-10-02T21:18:10.026981Z","steps":["trace[862240287] 'agreement among raft nodes before linearized reading'  (duration: 198.658368ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T21:18:31.432352Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.314774ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13514626998440572469 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.39\" mod_revision:389 > success:<request_put:<key:\"/registry/masterleases/192.168.39.39\" value_size:66 lease:4291254961585796659 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.39\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-02T21:18:31.432522Z","caller":"traceutil/trace.go:172","msg":"trace[942860357] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"252.152733ms","start":"2025-10-02T21:18:31.180357Z","end":"2025-10-02T21:18:31.432510Z","steps":["trace[942860357] 'process raft request'  (duration: 123.600303ms)","trace[942860357] 'compare'  (duration: 128.179549ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T21:19:11.406890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.127901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T21:19:11.406965Z","caller":"traceutil/trace.go:172","msg":"trace[786735479] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:410; }","duration":"106.205507ms","start":"2025-10-02T21:19:11.300737Z","end":"2025-10-02T21:19:11.406943Z","steps":["trace[786735479] 'agreement among raft nodes before linearized reading'  (duration: 37.85496ms)","trace[786735479] 'range keys from in-memory index tree'  (duration: 68.251418ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T21:19:11.407597Z","caller":"traceutil/trace.go:172","msg":"trace[1449324272] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"133.035983ms","start":"2025-10-02T21:19:11.274547Z","end":"2025-10-02T21:19:11.407583Z","steps":["trace[1449324272] 'process raft request'  (duration: 64.085147ms)","trace[1449324272] 'compare'  (duration: 68.160375ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T21:19:17.558768Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T21:19:17.558996Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-128856","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.39:2380"],"advertise-client-urls":["https://192.168.39.39:2379"]}
	{"level":"error","ts":"2025-10-02T21:19:17.559105Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:19:17.629589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:19:17.629644Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:19:17.629662Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"38979a8318efbb8d","current-leader-member-id":"38979a8318efbb8d"}
	{"level":"info","ts":"2025-10-02T21:19:17.629728Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T21:19:17.629744Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T21:19:17.629745Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:19:17.629815Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:19:17.629827Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T21:19:17.629886Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.39:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:19:17.629896Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.39:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:19:17.629902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.39:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:19:17.632182Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"error","ts":"2025-10-02T21:19:17.632237Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.39:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:19:17.632257Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.39:2380"}
	{"level":"info","ts":"2025-10-02T21:19:17.632263Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-128856","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.39:2380"],"advertise-client-urls":["https://192.168.39.39:2379"]}
	
	
	==> etcd [3aeb7be2b9b11725c458d3dc5c0a3eaf5b3d997a97813ba94d7b3a9e2e525643] <==
	{"level":"warn","ts":"2025-10-02T21:19:40.820006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.833396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.848461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.866931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.881560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.891650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.904119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.921018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.925420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.939268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.948657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.966383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.974976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.977557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.983619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:40.991467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:41.001321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:41.010039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:41.025784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:41.031325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:41.040479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:41.105805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:19:43.500760Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.861634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:replication-controller\" limit:1 ","response":"range_response_count:1 size:763"}
	{"level":"info","ts":"2025-10-02T21:19:43.500862Z","caller":"traceutil/trace.go:172","msg":"trace[1289417481] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:replication-controller; range_end:; response_count:1; response_revision:471; }","duration":"151.960802ms","start":"2025-10-02T21:19:43.348868Z","end":"2025-10-02T21:19:43.500829Z","steps":["trace[1289417481] 'range keys from in-memory index tree'  (duration: 151.75895ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T21:19:43.621581Z","caller":"traceutil/trace.go:172","msg":"trace[525290820] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"107.418563ms","start":"2025-10-02T21:19:43.514145Z","end":"2025-10-02T21:19:43.621563Z","steps":["trace[525290820] 'process raft request'  (duration: 106.362845ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:20:00 up 2 min,  0 users,  load average: 1.18, 0.49, 0.18
	Linux pause-128856 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [11567d5c6ef86dfb46f79bbc6ffabddf97b216eb14a9f66cf90db5331ce637ed] <==
	W1002 21:19:17.579214       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.579264       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.579491       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.579650       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.579690       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.579858       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.579936       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.579969       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580000       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580032       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580068       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580264       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580763       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580830       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580899       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.580952       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.581005       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.581058       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.581117       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.581187       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.581241       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.581449       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.581505       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:19:17.582232       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d7ea2f671499f3b49282348893e6ef436005ce83302d44ad9bb75120f294e565] <==
	I1002 21:19:41.855511       1 policy_source.go:240] refreshing policies
	I1002 21:19:41.860171       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 21:19:41.860228       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 21:19:41.860325       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 21:19:41.861770       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:19:41.863388       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:19:41.897714       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1002 21:19:41.905208       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:19:41.905428       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 21:19:41.914695       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:19:41.917344       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:19:41.934647       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 21:19:41.935776       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 21:19:41.961170       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:19:41.961151       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:19:41.965326       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:19:41.970265       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:19:42.299449       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:19:42.783267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:19:44.004398       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:19:44.050162       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:19:44.088515       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:19:44.098990       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:19:45.441061       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:19:45.541446       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4f4612b1df9269b91b676f8fbba243c1bbedff79a13ef12670c064228daf6327] <==
	I1002 21:18:06.555916       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:18:06.555945       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:18:06.555990       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 21:18:06.556874       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 21:18:06.557940       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:18:06.558022       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:18:06.558434       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 21:18:06.558502       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 21:18:06.558565       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-128856"
	I1002 21:18:06.558600       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 21:18:06.559074       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:18:06.560049       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:18:06.561413       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:18:06.562616       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:18:06.570470       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:18:06.572197       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 21:18:06.572645       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:18:06.588398       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:18:06.604398       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:18:06.605757       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:18:06.606980       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:18:06.607164       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:18:06.607324       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:18:06.607541       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:18:06.609410       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-controller-manager [c36437eef0a56439b31247733d8f3c8df89fcd147571f10b181bdc7ac527f5c6] <==
	I1002 21:19:45.257185       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:19:45.257239       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:19:45.260536       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 21:19:45.261817       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 21:19:45.265480       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:19:45.267655       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:19:45.269945       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:19:45.275345       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:19:45.278597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:19:45.278611       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:19:45.279908       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:19:45.282237       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:19:45.285265       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 21:19:45.285393       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 21:19:45.285588       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:19:45.285621       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 21:19:45.286066       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 21:19:45.287351       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:19:45.287393       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:19:45.287397       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:19:45.287438       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:19:45.287449       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:19:45.287458       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:19:45.292792       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:19:45.292834       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [11d6e2c5a0d1b7a7f238a790a9e7f52002034351cfd5d9a44b96b62f5acf3b7f] <==
	I1002 21:19:43.205044       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:19:43.310185       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:19:43.310361       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.39"]
	E1002 21:19:43.310597       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:19:43.369876       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 21:19:43.370096       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 21:19:43.370201       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:19:43.380038       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:19:43.380241       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:19:43.380266       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:19:43.384648       1 config.go:200] "Starting service config controller"
	I1002 21:19:43.385930       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:19:43.386019       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:19:43.386031       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:19:43.386051       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:19:43.386056       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:19:43.385221       1 config.go:309] "Starting node config controller"
	I1002 21:19:43.386370       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:19:43.386379       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:19:43.486456       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:19:43.486513       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:19:43.486534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [9d31f80733a3cc5ddd857f757c41cd2aa67b47084e1f991bfa8ea3b998fc0799] <==
	I1002 21:18:10.185004       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:18:10.286030       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:18:10.286076       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.39"]
	E1002 21:18:10.286147       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:18:10.331686       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 21:18:10.332580       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 21:18:10.333572       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:18:10.360169       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:18:10.360479       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:18:10.360504       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:18:10.366067       1 config.go:200] "Starting service config controller"
	I1002 21:18:10.366456       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:18:10.366501       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:18:10.366505       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:18:10.366515       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:18:10.366518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:18:10.367750       1 config.go:309] "Starting node config controller"
	I1002 21:18:10.367776       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:18:10.367782       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:18:10.467256       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:18:10.467321       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:18:10.467358       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05387411e6ed3c96e79e0122ad74634891c4e42e18758ffb12ade2efa81ea15d] <==
	E1002 21:17:59.563572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 21:17:59.563641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 21:17:59.563713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 21:17:59.563781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 21:17:59.563927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 21:17:59.564005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 21:17:59.564070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 21:17:59.564147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:17:59.564205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:17:59.564330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:17:59.568617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 21:17:59.568760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:18:00.406507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 21:18:00.436954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 21:18:00.457717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 21:18:00.494320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 21:18:00.580937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 21:18:00.683918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 21:18:00.699424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 21:18:00.982802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1002 21:18:03.647111       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:19:17.570182       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 21:19:17.573930       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 21:19:17.573960       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 21:19:17.573981       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [71db838a4f9acc481f8110356b8ca4103f846bdeff69f6a77419e9cd8fb5b8fb] <==
	I1002 21:19:39.100047       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:19:41.825596       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:19:41.825618       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:19:41.825626       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:19:41.825633       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:19:41.849268       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:19:41.849374       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:19:41.853894       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:19:41.854176       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:19:41.855327       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:19:41.854193       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:19:41.955771       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.439756    3126 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-128856\" not found" node="pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.440083    3126 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-128856\" not found" node="pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.440489    3126 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-128856\" not found" node="pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.442705    3126 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-128856\" not found" node="pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.860785    3126 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.905950    3126 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-128856\" already exists" pod="kube-system/kube-apiserver-pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.905991    3126 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.908124    3126 kubelet_node_status.go:124] "Node was previously registered" node="pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.908604    3126 kubelet_node_status.go:78] "Successfully registered node" node="pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.908715    3126 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.910783    3126 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.920702    3126 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-128856\" already exists" pod="kube-system/kube-controller-manager-pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.920730    3126 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.928528    3126 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-128856\" already exists" pod="kube-system/kube-scheduler-pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: I1002 21:19:41.928634    3126 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-128856"
	Oct 02 21:19:41 pause-128856 kubelet[3126]: E1002 21:19:41.938606    3126 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-128856\" already exists" pod="kube-system/etcd-pause-128856"
	Oct 02 21:19:42 pause-128856 kubelet[3126]: I1002 21:19:42.221794    3126 apiserver.go:52] "Watching apiserver"
	Oct 02 21:19:42 pause-128856 kubelet[3126]: I1002 21:19:42.261521    3126 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 21:19:42 pause-128856 kubelet[3126]: I1002 21:19:42.290705    3126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52d68123-da86-4724-899d-ab3a24722a2c-lib-modules\") pod \"kube-proxy-jxs4h\" (UID: \"52d68123-da86-4724-899d-ab3a24722a2c\") " pod="kube-system/kube-proxy-jxs4h"
	Oct 02 21:19:42 pause-128856 kubelet[3126]: I1002 21:19:42.290753    3126 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52d68123-da86-4724-899d-ab3a24722a2c-xtables-lock\") pod \"kube-proxy-jxs4h\" (UID: \"52d68123-da86-4724-899d-ab3a24722a2c\") " pod="kube-system/kube-proxy-jxs4h"
	Oct 02 21:19:44 pause-128856 kubelet[3126]: I1002 21:19:44.797597    3126 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 21:19:47 pause-128856 kubelet[3126]: E1002 21:19:47.365448    3126 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759439987365133207  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 02 21:19:47 pause-128856 kubelet[3126]: E1002 21:19:47.365471    3126 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759439987365133207  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 02 21:19:57 pause-128856 kubelet[3126]: E1002 21:19:57.369991    3126 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759439997367717785  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 02 21:19:57 pause-128856 kubelet[3126]: E1002 21:19:57.370047    3126 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759439997367717785  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-128856 -n pause-128856
helpers_test.go:269: (dbg) Run:  kubectl --context pause-128856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (75.24s)
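The post-mortem above tells one consistent story: at 21:19:17 every apiserver channel to etcd at 127.0.0.1:2379 is refused and the old scheduler exits with "finished without leader elect"; at 21:19:41 a fresh apiserver, controller-manager, scheduler, and kube-proxy come up and re-sync their caches. In other words, the second start rebuilt the control plane, which is the reconfiguration this test is meant to rule out. A minimal sketch for re-running just this subtest locally, assuming the minikube repo's test/integration layout, its -minikube-start-args test flag, and the out/minikube-linux-amd64 Makefile target (the timeout value is illustrative):

	# Build the binary under test, then re-run only this subtest on kvm2 + crio.
	make out/minikube-linux-amd64
	go test ./test/integration -v -timeout 30m \
	  -run "TestPause/serial/SecondStartNoReconfiguration" \
	  -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"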

                                                
                                    

Test pass (280/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 25.96
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 12.9
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.66
22 TestOffline 79.74
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 201.94
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 10.51
35 TestAddons/parallel/Registry 20.45
36 TestAddons/parallel/RegistryCreds 0.82
38 TestAddons/parallel/InspektorGadget 6.44
39 TestAddons/parallel/MetricsServer 6.09
41 TestAddons/parallel/CSI 51.16
42 TestAddons/parallel/Headlamp 19.84
43 TestAddons/parallel/CloudSpanner 6.88
44 TestAddons/parallel/LocalPath 13.14
45 TestAddons/parallel/NvidiaDevicePlugin 5.53
46 TestAddons/parallel/Yakd 11.04
48 TestAddons/StoppedEnableDisable 81.97
49 TestCertOptions 55.7
50 TestCertExpiration 299.32
52 TestForceSystemdFlag 77.66
53 TestForceSystemdEnv 41.95
55 TestKVMDriverInstallOrUpdate 0.86
59 TestErrorSpam/setup 37.56
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.78
62 TestErrorSpam/pause 1.63
63 TestErrorSpam/unpause 1.75
64 TestErrorSpam/stop 4.85
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 50.13
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 38.99
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
76 TestFunctional/serial/CacheCmd/cache/add_local 2.2
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 32.41
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.31
87 TestFunctional/serial/LogsFileCmd 1.36
88 TestFunctional/serial/InvalidService 4.82
90 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DashboardCmd 14.77
92 TestFunctional/parallel/DryRun 0.29
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 0.89
98 TestFunctional/parallel/ServiceCmdConnect 21.54
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 44.87
102 TestFunctional/parallel/SSHCmd 0.42
103 TestFunctional/parallel/CpCmd 1.38
104 TestFunctional/parallel/MySQL 26.8
105 TestFunctional/parallel/FileSync 0.23
106 TestFunctional/parallel/CertSync 1.34
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
114 TestFunctional/parallel/License 0.41
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
116 TestFunctional/parallel/MountCmd/any-port 9.87
117 TestFunctional/parallel/ProfileCmd/profile_list 0.4
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.58
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.66
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
125 TestFunctional/parallel/ImageCommands/ImageBuild 4.51
126 TestFunctional/parallel/ImageCommands/Setup 1.93
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.41
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.78
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.68
146 TestFunctional/parallel/MountCmd/specific-port 1.72
147 TestFunctional/parallel/MountCmd/VerifyCleanup 0.77
148 TestFunctional/parallel/ServiceCmd/DeployApp 22.15
149 TestFunctional/parallel/ServiceCmd/List 1.26
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.32
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
152 TestFunctional/parallel/ServiceCmd/Format 0.29
153 TestFunctional/parallel/ServiceCmd/URL 0.28
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 236.36
162 TestMultiControlPlane/serial/DeployApp 6.57
163 TestMultiControlPlane/serial/PingHostFromPods 1.2
164 TestMultiControlPlane/serial/AddWorkerNode 44.54
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
167 TestMultiControlPlane/serial/CopyFile 12.83
168 TestMultiControlPlane/serial/StopSecondaryNode 75.6
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
170 TestMultiControlPlane/serial/RestartSecondaryNode 34.84
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 358.57
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.63
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
175 TestMultiControlPlane/serial/StopCluster 253.22
176 TestMultiControlPlane/serial/RestartCluster 114.1
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
178 TestMultiControlPlane/serial/AddSecondaryNode 72.16
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
183 TestJSONOutput/start/Command 78.35
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.63
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.97
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 75.04
215 TestMountStart/serial/StartWithMountFirst 21.54
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 21.45
218 TestMountStart/serial/VerifyMountSecond 0.37
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.36
221 TestMountStart/serial/Stop 1.23
222 TestMountStart/serial/RestartStopped 19.34
223 TestMountStart/serial/VerifyMountPostStop 0.36
226 TestMultiNode/serial/FreshStart2Nodes 95.41
227 TestMultiNode/serial/DeployApp2Nodes 5.86
228 TestMultiNode/serial/PingHostFrom2Pods 0.79
229 TestMultiNode/serial/AddNode 44
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.59
232 TestMultiNode/serial/CopyFile 7.16
233 TestMultiNode/serial/StopNode 2.39
234 TestMultiNode/serial/StartAfterStop 37.82
235 TestMultiNode/serial/RestartKeepsNodes 336.49
236 TestMultiNode/serial/DeleteNode 2.78
237 TestMultiNode/serial/StopMultiNode 167.47
238 TestMultiNode/serial/RestartMultiNode 85.26
239 TestMultiNode/serial/ValidateNameConflict 38.9
246 TestScheduledStopUnix 108.5
250 TestRunningBinaryUpgrade 151.25
252 TestKubernetesUpgrade 133.58
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 81.13
257 TestNoKubernetes/serial/StartWithStopK8s 49.24
258 TestNoKubernetes/serial/Start 44.75
273 TestNetworkPlugins/group/false 3.62
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
278 TestNoKubernetes/serial/ProfileList 1.12
279 TestStoppedBinaryUpgrade/Setup 3.01
280 TestNoKubernetes/serial/Stop 1.37
281 TestNoKubernetes/serial/StartNoArgs 31.91
282 TestStoppedBinaryUpgrade/Upgrade 122.99
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
285 TestPause/serial/Start 95.04
287 TestStartStop/group/old-k8s-version/serial/FirstStart 55.1
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
290 TestStartStop/group/no-preload/serial/FirstStart 113.65
292 TestStartStop/group/old-k8s-version/serial/DeployApp 11.36
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
294 TestStartStop/group/old-k8s-version/serial/Stop 82.56
296 TestStartStop/group/embed-certs/serial/FirstStart 80.68
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.98
299 TestStartStop/group/no-preload/serial/DeployApp 11.33
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
301 TestStartStop/group/no-preload/serial/Stop 85.62
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
303 TestStartStop/group/old-k8s-version/serial/SecondStart 46.13
304 TestStartStop/group/embed-certs/serial/DeployApp 11.29
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.56
306 TestStartStop/group/embed-certs/serial/Stop 83.97
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.32
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.88
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
313 TestStartStop/group/old-k8s-version/serial/Pause 2.64
315 TestStartStop/group/newest-cni/serial/FirstStart 42.12
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/no-preload/serial/SecondStart 67.98
318 TestStartStop/group/newest-cni/serial/DeployApp 0
319 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
320 TestStartStop/group/newest-cni/serial/Stop 13.06
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
322 TestStartStop/group/embed-certs/serial/SecondStart 47.97
323 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
324 TestStartStop/group/newest-cni/serial/SecondStart 42.09
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 67.32
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
330 TestStartStop/group/no-preload/serial/Pause 3.3
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
332 TestNetworkPlugins/group/auto/Start 85.24
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
336 TestStartStop/group/newest-cni/serial/Pause 3.05
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
338 TestNetworkPlugins/group/kindnet/Start 76.73
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
340 TestStartStop/group/embed-certs/serial/Pause 3.78
341 TestNetworkPlugins/group/calico/Start 98.26
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.08
346 TestNetworkPlugins/group/custom-flannel/Start 85.48
347 TestNetworkPlugins/group/auto/KubeletFlags 0.24
348 TestNetworkPlugins/group/auto/NetCatPod 11.27
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
351 TestNetworkPlugins/group/kindnet/NetCatPod 10.47
352 TestNetworkPlugins/group/auto/DNS 0.19
353 TestNetworkPlugins/group/auto/Localhost 0.24
354 TestNetworkPlugins/group/auto/HairPin 0.17
355 TestNetworkPlugins/group/kindnet/DNS 0.22
356 TestNetworkPlugins/group/kindnet/Localhost 0.15
357 TestNetworkPlugins/group/kindnet/HairPin 0.17
358 TestNetworkPlugins/group/enable-default-cni/Start 84.45
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/flannel/Start 85.71
361 TestNetworkPlugins/group/calico/KubeletFlags 0.26
362 TestNetworkPlugins/group/calico/NetCatPod 10.3
363 TestNetworkPlugins/group/calico/DNS 0.19
364 TestNetworkPlugins/group/calico/Localhost 0.14
365 TestNetworkPlugins/group/calico/HairPin 0.15
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.51
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.52
368 TestNetworkPlugins/group/bridge/Start 82.49
369 TestNetworkPlugins/group/custom-flannel/DNS 0.2
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.22
374 TestNetworkPlugins/group/flannel/ControllerPod 6.01
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
379 TestNetworkPlugins/group/flannel/NetCatPod 9.21
380 TestNetworkPlugins/group/flannel/DNS 0.16
381 TestNetworkPlugins/group/flannel/Localhost 0.13
382 TestNetworkPlugins/group/flannel/HairPin 0.12
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
384 TestNetworkPlugins/group/bridge/NetCatPod 10.24
385 TestNetworkPlugins/group/bridge/DNS 0.13
386 TestNetworkPlugins/group/bridge/Localhost 0.12
387 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (25.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-547937 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-547937 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (25.960276364s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.96s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 20:18:39.636351  497569 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 20:18:39.636503  497569 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
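preload-exists passes as soon as the tarball cached by the previous download step is present on disk. A hand-run equivalent of that check, assuming a POSIX shell on the CI agent (the path is the one reported in the log above):

	TARBALL=/home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	test -f "$TARBALL" && echo "preload cached" || echo "preload missing"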

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-547937
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-547937: exit status 85 (62.203255ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-547937 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-547937 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:13.718244  497581 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:13.718484  497581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:13.718493  497581 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:13.718497  497581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:13.718697  497581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	W1002 20:18:13.718828  497581 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21682-492630/.minikube/config/config.json: open /home/jenkins/minikube-integration/21682-492630/.minikube/config/config.json: no such file or directory
	I1002 20:18:13.719307  497581 out.go:368] Setting JSON to true
	I1002 20:18:13.720374  497581 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3629,"bootTime":1759432665,"procs":384,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:18:13.720458  497581 start.go:140] virtualization: kvm guest
	I1002 20:18:13.722243  497581 out.go:99] [download-only-547937] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1002 20:18:13.722400  497581 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 20:18:13.722435  497581 notify.go:220] Checking for updates...
	I1002 20:18:13.723416  497581 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:18:13.724540  497581 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:13.725542  497581 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 20:18:13.726475  497581 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 20:18:13.727384  497581 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 20:18:13.728880  497581 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:18:13.729197  497581 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:18:14.174840  497581 out.go:99] Using the kvm2 driver based on user configuration
	I1002 20:18:14.174881  497581 start.go:304] selected driver: kvm2
	I1002 20:18:14.174895  497581 start.go:924] validating driver "kvm2" against <nil>
	I1002 20:18:14.175374  497581 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:14.175543  497581 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:18:14.190604  497581 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:18:14.190632  497581 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:18:14.202932  497581 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:18:14.202973  497581 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:14.203525  497581 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1002 20:18:14.203764  497581 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:18:14.203797  497581 cni.go:84] Creating CNI manager for ""
	I1002 20:18:14.203859  497581 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:18:14.203870  497581 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:14.203952  497581 start.go:348] cluster config:
	{Name:download-only-547937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-547937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:14.204187  497581 iso.go:125] acquiring lock: {Name:mk7586bb79dc7f44da54ee16895643204aac50ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:14.205744  497581 out.go:99] Downloading VM boot image ...
	I1002 20:18:14.205829  497581 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21682-492630/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 20:18:25.425795  497581 out.go:99] Starting "download-only-547937" primary control-plane node in "download-only-547937" cluster
	I1002 20:18:25.425851  497581 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 20:18:25.531592  497581 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:25.531630  497581 cache.go:58] Caching tarball of preloaded images
	I1002 20:18:25.531824  497581 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 20:18:25.533433  497581 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 20:18:25.533456  497581 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 20:18:25.760593  497581 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1002 20:18:25.760747  497581 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:38.365550  497581 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 20:18:38.365964  497581 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/download-only-547937/config.json ...
	I1002 20:18:38.365996  497581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/download-only-547937/config.json: {Name:mka9f74144e2ebfab146abaebe8c05d87b972848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:38.366165  497581 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 20:18:38.366348  497581 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21682-492630/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-547937 host does not exist
	  To start a cluster, run: "minikube start -p download-only-547937"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
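
Note: exit status 85 is the expected outcome here, not a failure. A --download-only profile never creates the VM, so "minikube logs" has no host to read from (the hint above says exactly that). A minimal reproduction, trimmed from the commands and profile name in this run:

  out/minikube-linux-amd64 start -o=json --download-only -p download-only-547937 --force \
    --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2
  out/minikube-linux-amd64 logs -p download-only-547937   # prints the hint above and exits 85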

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-547937
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (12.9s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-533787 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-533787 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.895401986s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.90s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 20:18:52.891285  497569 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 20:18:52.891326  497569 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-533787
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-533787: exit status 85 (57.956382ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-547937 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-547937 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ delete  │ -p download-only-547937                                                                                                                                                                             │ download-only-547937 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ start   │ -o=json --download-only -p download-only-533787 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-533787 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:40.039952  497855 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:40.040262  497855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:40.040273  497855 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:40.040277  497855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:40.040572  497855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 20:18:40.041188  497855 out.go:368] Setting JSON to true
	I1002 20:18:40.042253  497855 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3655,"bootTime":1759432665,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:18:40.042347  497855 start.go:140] virtualization: kvm guest
	I1002 20:18:40.043958  497855 out.go:99] [download-only-533787] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:18:40.044104  497855 notify.go:220] Checking for updates...
	I1002 20:18:40.045101  497855 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:18:40.046187  497855 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:40.047587  497855 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 20:18:40.048537  497855 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 20:18:40.049583  497855 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 20:18:40.051402  497855 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:18:40.051642  497855 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:18:40.087265  497855 out.go:99] Using the kvm2 driver based on user configuration
	I1002 20:18:40.087300  497855 start.go:304] selected driver: kvm2
	I1002 20:18:40.087308  497855 start.go:924] validating driver "kvm2" against <nil>
	I1002 20:18:40.087649  497855 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:40.087758  497855 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:18:40.101747  497855 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:18:40.101780  497855 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21682-492630/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:18:40.115819  497855 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:18:40.115868  497855 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:18:40.116434  497855 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1002 20:18:40.116622  497855 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:18:40.116649  497855 cni.go:84] Creating CNI manager for ""
	I1002 20:18:40.116724  497855 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:18:40.116735  497855 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:18:40.116816  497855 start.go:348] cluster config:
	{Name:download-only-533787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-533787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:40.116914  497855 iso.go:125] acquiring lock: {Name:mk7586bb79dc7f44da54ee16895643204aac50ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:40.118110  497855 out.go:99] Starting "download-only-533787" primary control-plane node in "download-only-533787" cluster
	I1002 20:18:40.118127  497855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:40.225624  497855 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:40.225680  497855 cache.go:58] Caching tarball of preloaded images
	I1002 20:18:40.226481  497855 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:40.227873  497855 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 20:18:40.227897  497855 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 20:18:40.341989  497855 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1002 20:18:40.342046  497855 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21682-492630/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-533787 host does not exist
	  To start a cluster, run: "minikube start -p download-only-533787"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)
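
The preload download above fetches its MD5 from the GCS API and appends it as a checksum query parameter so download.go can verify the tarball. The same artifact can be checked by hand (URL and checksum copied from this run's log):

  curl -fLo preload.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
  md5sum preload.tar.lz4   # expect d1a46823b9241c5d38b5e0866197f2a8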

TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-533787
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1002 20:18:53.495959  497569 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-182362 --alsologtostderr --binary-mirror http://127.0.0.1:37441 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-182362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-182362
--- PASS: TestBinaryMirror (0.66s)
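
TestBinaryMirror serves kubectl from a local HTTP mirror (127.0.0.1:37441 in this run) and relies on the upstream .sha256 file for integrity. A sketch of the same verification done manually, using the URLs from the log:

  curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
  echo "$(curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum --check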

TestOffline (79.74s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-666064 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-666064 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m17.780904048s)
helpers_test.go:175: Cleaning up "offline-crio-666064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-666064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-666064: (1.958846786s)
--- PASS: TestOffline (79.74s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-760875
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-760875: exit status 85 (54.192323ms)

-- stdout --
	* Profile "addons-760875" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-760875"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-760875
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-760875: exit status 85 (52.514591ms)

-- stdout --
	* Profile "addons-760875" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-760875"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (201.94s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-760875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-760875 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m21.935130643s)
--- PASS: TestAddons/Setup (201.94s)
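
For reference, the setup boils down to a single start with one --addons flag per addon. A trimmed, hand-runnable sketch (same profile, driver, and runtime as above; only a subset of the addons listed there):

  out/minikube-linux-amd64 start -p addons-760875 --wait=true --memory=4096 \
    --driver=kvm2 --container-runtime=crio \
    --addons=registry --addons=ingress --addons=ingress-dns --addons=csi-hostpath-driver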

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-760875 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-760875 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
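
The assertion here is that the gcp-auth secret shows up in a freshly created namespace. The check, reduced to the two kubectl calls from the trace:

  kubectl --context addons-760875 create ns new-namespace
  kubectl --context addons-760875 get secret gcp-auth -n new-namespace   # must already exist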

TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-760875 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-760875 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [313785c7-79b4-466d-af42-76afbf3a7fe5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [313785c7-79b4-466d-af42-76afbf3a7fe5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004059703s
addons_test.go:694: (dbg) Run:  kubectl --context addons-760875 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-760875 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-760875 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)
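
The fake-credentials check verifies that the gcp-auth addon injects Google credential environment variables into the busybox pod. The probes, verbatim from the trace:

  kubectl --context addons-760875 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-760875 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"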

TestAddons/parallel/Registry (20.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.427334ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I1002 20:22:35.475341  497569 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 20:22:35.475363  497569 kapi.go:107] duration metric: took 10.903195ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:352: "registry-66898fdd98-ntfh4" [c74ad645-ae4b-4223-925f-d29c9be1982d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006807485s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2d9m2" [f52e5ce2-9dbc-4f7a-a552-2a8d00f23cf7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004816689s
addons_test.go:392: (dbg) Run:  kubectl --context addons-760875 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-760875 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-760875 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.530798356s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 ip
2025/10/02 20:22:55 [DEBUG] GET http://192.168.39.220:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.45s)
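
The core of the registry check is an in-cluster reachability probe through the service DNS name, run from a throwaway busybox pod (command from the trace):

  kubectl --context addons-760875 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"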

TestAddons/parallel/RegistryCreds (0.82s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.949515ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-760875
addons_test.go:332: (dbg) Run:  kubectl --context addons-760875 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.82s)

TestAddons/parallel/InspektorGadget (6.44s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-wcg92" [c61b302c-3af6-4eca-b091-713734b931a5] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005910494s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.44s)

TestAddons/parallel/MetricsServer (6.09s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.951332ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5n4lk" [7217fcab-2e35-4e35-8955-9287e23137f5] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005194398s
addons_test.go:463: (dbg) Run:  kubectl --context addons-760875 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 addons disable metrics-server --alsologtostderr -v=1: (1.006366176s)
--- PASS: TestAddons/parallel/MetricsServer (6.09s)
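
Once the metrics-server pod is healthy, the pass/fail signal is simply whether the Metrics API serves data (command from the trace; it fails until the first scrape completes):

  kubectl --context addons-760875 top pods -n kube-system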

TestAddons/parallel/CSI (51.16s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 10.911024ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bce93bf7-c08b-4549-9af9-c5edd34286e2] Pending
helpers_test.go:352: "task-pv-pod" [bce93bf7-c08b-4549-9af9-c5edd34286e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [bce93bf7-c08b-4549-9af9-c5edd34286e2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.011153721s
addons_test.go:572: (dbg) Run:  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-760875 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-760875 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-760875 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-760875 delete pod task-pv-pod: (1.132510466s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-760875 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f277121b-c077-4dd0-ba25-b946ac2c01ee] Pending
helpers_test.go:352: "task-pv-pod-restore" [f277121b-c077-4dd0-ba25-b946ac2c01ee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f277121b-c077-4dd0-ba25-b946ac2c01ee] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004184319s
addons_test.go:614: (dbg) Run:  kubectl --context addons-760875 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-760875 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-760875 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 addons disable volumesnapshots --alsologtostderr -v=1: (1.129152804s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.818879538s)
--- PASS: TestAddons/parallel/CSI (51.16s)
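
Condensed, the CSI scenario exercised above is a provision -> snapshot -> restore round trip. A sketch using the same testdata manifests and object names (run from the minikube test directory):

  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pvc.yaml          # PVC: hpvc
  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pv-pod.yaml       # pod: task-pv-pod
  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/snapshot.yaml     # VolumeSnapshot: new-snapshot-demo
  kubectl --context addons-760875 delete pod task-pv-pod && kubectl --context addons-760875 delete pvc hpvc
  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pvc-restore.yaml  # PVC: hpvc-restore, from the snapshot
  kubectl --context addons-760875 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml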

TestAddons/parallel/Headlamp (19.84s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-760875 --alsologtostderr -v=1
I1002 20:22:35.464468  497569 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-kclj9" [7d76d50d-9a65-416d-bdae-695ea324aaa4] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-kclj9" [7d76d50d-9a65-416d-bdae-695ea324aaa4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-kclj9" [7d76d50d-9a65-416d-bdae-695ea324aaa4] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004682631s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 addons disable headlamp --alsologtostderr -v=1: (5.971931529s)
--- PASS: TestAddons/parallel/Headlamp (19.84s)

TestAddons/parallel/CloudSpanner (6.88s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-xb8cv" [97b428d6-683a-4fec-8055-e2f7b28e029e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.008125181s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.88s)

TestAddons/parallel/LocalPath (13.14s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-760875 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-760875 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [fd7db70f-e629-4651-9071-87d2d247a378] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [fd7db70f-e629-4651-9071-87d2d247a378] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [fd7db70f-e629-4651-9071-87d2d247a378] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003226099s
addons_test.go:967: (dbg) Run:  kubectl --context addons-760875 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 ssh "cat /opt/local-path-provisioner/pvc-90967178-cfba-4823-8096-89c566fceab3_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-760875 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-760875 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.14s)
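
The readback step is what proves local-path actually provisioned on the node's filesystem. Note the pvc-... directory name embeds the PVC UID, so it is unique to each run (path below copied from this run's trace):

  out/minikube-linux-amd64 -p addons-760875 ssh \
    "cat /opt/local-path-provisioner/pvc-90967178-cfba-4823-8096-89c566fceab3_default_test-pvc/file1"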

TestAddons/parallel/NvidiaDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fvbmg" [44467d83-4766-45cb-a8b3-8ed6ef1292e6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004639482s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (11.04s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-h6vw7" [9fe37727-086e-4a30-97f0-5eedca2954a8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010872994s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-760875 addons disable yakd --alsologtostderr -v=1: (6.029759845s)
--- PASS: TestAddons/parallel/Yakd (11.04s)

TestAddons/StoppedEnableDisable (81.97s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-760875
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-760875: (1m21.703532973s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-760875
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-760875
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-760875
--- PASS: TestAddons/StoppedEnableDisable (81.97s)

TestCertOptions (55.7s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-664739 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-664739 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.969040601s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-664739 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-664739 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-664739 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-664739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-664739
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-664739: (1.231355128s)
--- PASS: TestCertOptions (55.70s)
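
To eyeball the extra names and IPs baked into the apiserver certificate, the ssh/openssl command from the trace can be filtered down to the SAN block (the grep is an addition for readability, not part of the test):

  out/minikube-linux-amd64 -p cert-options-664739 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 "Subject Alternative Name"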

TestCertExpiration (299.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-852898 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-852898 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.084984318s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-852898 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-852898 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.39062831s)
helpers_test.go:175: Cleaning up "cert-expiration-852898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-852898
--- PASS: TestCertExpiration (299.32s)
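
The expiration test is two starts on the same profile: one issuing 3-minute certificates, then, after they lapse, a restart with a one-year horizon that must rotate them. The flag pair, from the trace:

  out/minikube-linux-amd64 start -p cert-expiration-852898 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  # ...wait out the 3m window (the gap between the two starts is visible in the 299s total)...
  out/minikube-linux-amd64 start -p cert-expiration-852898 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio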

TestForceSystemdFlag (77.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-093085 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-093085 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.595574919s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-093085 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-093085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-093085
--- PASS: TestForceSystemdFlag (77.66s)
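
The pass condition is CRI-O's drop-in config reflecting systemd cgroups. Inspecting it by hand (the expected key below is an assumption about minikube's generated config, not quoted from this log):

  out/minikube-linux-amd64 -p force-systemd-flag-093085 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
  # expect: cgroup_manager = "systemd"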

TestForceSystemdEnv (41.95s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-727741 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-727741 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.047106429s)
helpers_test.go:175: Cleaning up "force-systemd-env-727741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-727741
--- PASS: TestForceSystemdEnv (41.95s)

TestKVMDriverInstallOrUpdate (0.86s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (0.86s)

TestErrorSpam/setup (37.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-425430 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-425430 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:27:16.840901  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:16.847280  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:16.858632  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:16.879968  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:16.921336  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:17.002782  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:17.164391  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:17.486204  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:18.128282  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:19.410226  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:21.972891  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:27.094782  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:27:37.336557  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-425430 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-425430 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.562146445s)
--- PASS: TestErrorSpam/setup (37.56s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (4.85s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 stop: (1.805694241s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 stop: (1.02230892s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-425430 --log_dir /tmp/nospam-425430 stop: (2.026278713s)
--- PASS: TestErrorSpam/stop (4.85s)
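
The three stop runs above make up the spam check: each invocation must exit cleanly and emit no stray warning or error lines. A minimal sketch of that kind of check, assuming a hypothetical runQuiet helper (not minikube's actual error_spam_test code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runQuiet runs a command and returns any non-empty stderr lines, so a
// caller can fail a test if the command produced log spam.
// Illustrative sketch only, not minikube's implementation.
func runQuiet(bin string, args ...string) ([]string, error) {
	var stderr strings.Builder
	cmd := exec.Command(bin, args...)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return nil, fmt.Errorf("%s %v: %w", bin, args, err)
	}
	var spam []string
	for _, line := range strings.Split(stderr.String(), "\n") {
		if strings.TrimSpace(line) != "" {
			spam = append(spam, line)
		}
	}
	return spam, nil
}

func main() {
	spam, err := runQuiet("out/minikube-linux-amd64", "-p", "nospam-425430", "--log_dir", "/tmp/nospam-425430", "stop")
	fmt.Println(spam, err)
}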

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21682-492630/.minikube/files/etc/test/nested/copy/497569/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-175435 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:27:57.818087  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-175435 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.129328868s)
--- PASS: TestFunctional/serial/StartWithProxy (50.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.99s)

=== RUN   TestFunctional/serial/SoftStart
I1002 20:28:37.329590  497569 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-175435 --alsologtostderr -v=8
E1002 20:28:38.780028  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-175435 --alsologtostderr -v=8: (38.993182459s)
functional_test.go:678: soft start took 38.993877271s for "functional-175435" cluster.
I1002 20:29:16.323132  497569 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (38.99s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-175435 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 cache add registry.k8s.io/pause:3.1: (1.031033349s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 cache add registry.k8s.io/pause:3.3: (1.198760883s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-175435 /tmp/TestFunctionalserialCacheCmdcacheadd_local3702431587/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cache add minikube-local-cache-test:functional-175435
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 cache add minikube-local-cache-test:functional-175435: (1.900370804s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cache delete minikube-local-cache-test:functional-175435
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-175435
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.662975ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
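
This sequence is the substance of the test: "crictl rmi" removes the image from the node, so the first "crictl inspecti" fails with exit status 1, then "cache reload" pushes minikube's on-disk cache back into the runtime and the final "inspecti" succeeds. A sketch of that check-and-reload flow (illustrative only, not minikube's code):

package main

import (
	"log"
	"os/exec"
)

// ensureCached re-pushes minikube's image cache into the node's container
// runtime whenever the image is missing there. Sketch, not the test itself.
func ensureCached(profile, image string) error {
	// Probe the runtime inside the VM; non-zero exit means the image is gone.
	probe := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo", "crictl", "inspecti", image)
	if probe.Run() == nil {
		return nil // already present
	}
	// Reload everything in the local cache back into the runtime.
	reload := exec.Command("out/minikube-linux-amd64", "-p", profile, "cache", "reload")
	return reload.Run()
}

func main() {
	if err := ensureCached("functional-175435", "registry.k8s.io/pause:latest"); err != nil {
		log.Fatal(err)
	}
}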

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 kubectl -- --context functional-175435 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-175435 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (32.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-175435 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-175435 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.409693807s)
functional_test.go:776: restart took 32.409836426s for "functional-175435" cluster.
I1002 20:29:56.477002  497569 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.41s)
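
The --extra-config value has the shape component.key=value; the profile dumps later in this log show it stored as ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]. A rough sketch of that split (this mirrors the observed format, not necessarily minikube's parser):

package main

import (
	"fmt"
	"strings"
)

// ExtraOption mirrors the Component/Key/Value triple seen in the profile dumps.
type ExtraOption struct {
	Component, Key, Value string
}

// parseExtraConfig splits "component.key=value"; only the first '.' and the
// first '=' are structural, so keys may themselves contain dashes or dots.
func parseExtraConfig(s string) (ExtraOption, error) {
	kv := strings.SplitN(s, "=", 2)
	if len(kv) != 2 {
		return ExtraOption{}, fmt.Errorf("missing '=' in %q", s)
	}
	ck := strings.SplitN(kv[0], ".", 2)
	if len(ck) != 2 {
		return ExtraOption{}, fmt.Errorf("missing component prefix in %q", s)
	}
	return ExtraOption{Component: ck[0], Key: ck[1], Value: kv[1]}, nil
}

func main() {
	opt, _ := parseExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision")
	fmt.Printf("%+v\n", opt) // {Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}
}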

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-175435 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
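
The phase/status lines above come from walking the control-plane pods' JSON: status.phase must be Running and the Ready condition True for each component. A minimal decoding sketch with hand-rolled struct types (an assumption; the real test uses the Kubernetes API types):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// podList declares just enough of the PodList shape for the health check.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Feed it `kubectl get po -l tier=control-plane -n kube-system -o=json` on stdin.
	var pl podList
	if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}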

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 logs: (1.306994273s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 logs --file /tmp/TestFunctionalserialLogsFileCmd1924002387/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 logs --file /tmp/TestFunctionalserialLogsFileCmd1924002387/001/logs.txt: (1.361656218s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.82s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-175435 apply -f testdata/invalidsvc.yaml
E1002 20:30:00.701357  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-175435
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-175435: exit status 115 (293.086282ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.180:30525 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-175435 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-175435 delete -f testdata/invalidsvc.yaml: (1.319546972s)
--- PASS: TestFunctional/serial/InvalidService (4.82s)
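
The test passes because minikube service fails fast with the dedicated SVC_UNREACHABLE exit code (115) instead of hanging on a service with no running pods. A sketch of asserting a specific exit code from Go, roughly what the harness does (the exitCode helper is hypothetical):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and surfaces its exit status, distinguishing
// "ran and failed" from "could not run at all".
func exitCode(bin string, args ...string) (int, error) {
	err := exec.Command(bin, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return -1, err // e.g. binary not found
}

func main() {
	code, err := exitCode("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-175435")
	if err != nil {
		panic(err)
	}
	fmt.Println("exit:", code) // the run above saw 115 (SVC_UNREACHABLE)
}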

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 config get cpus: exit status 14 (57.697612ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 config get cpus: exit status 14 (51.720338ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
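
The loop above is a set/get/unset round trip: config get cpus exits 14 whenever the key is absent, both before the set and after the unset. The same expectations as a table-driven sketch (exit codes taken from the log; the run helper is hypothetical):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary and returns its exit code; -1 means the
// command could not be started at all. Sketch helper, not the harness.
func run(args ...string) int {
	err := exec.Command("out/minikube-linux-amd64", args...).Run()
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	steps := []struct {
		args []string
		want int
	}{
		{[]string{"-p", "functional-175435", "config", "unset", "cpus"}, 0},
		{[]string{"-p", "functional-175435", "config", "get", "cpus"}, 14}, // key absent
		{[]string{"-p", "functional-175435", "config", "set", "cpus", "2"}, 0},
		{[]string{"-p", "functional-175435", "config", "get", "cpus"}, 0},
		{[]string{"-p", "functional-175435", "config", "unset", "cpus"}, 0},
		{[]string{"-p", "functional-175435", "config", "get", "cpus"}, 14}, // absent again
	}
	for _, s := range steps {
		if got := run(s.args...); got != s.want {
			fmt.Printf("%v: exit %d, want %d\n", s.args, got, s.want)
		}
	}
}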

TestFunctional/parallel/DashboardCmd (14.77s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-175435 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-175435 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 504951: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.77s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-175435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-175435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (130.543721ms)

-- stdout --
	* [functional-175435] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 20:30:04.665846  504462 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:30:04.666102  504462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:30:04.666111  504462 out.go:374] Setting ErrFile to fd 2...
	I1002 20:30:04.666115  504462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:30:04.666375  504462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 20:30:04.666912  504462 out.go:368] Setting JSON to false
	I1002 20:30:04.667814  504462 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4340,"bootTime":1759432665,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:30:04.667906  504462 start.go:140] virtualization: kvm guest
	I1002 20:30:04.669400  504462 out.go:179] * [functional-175435] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:30:04.670464  504462 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:30:04.670480  504462 notify.go:220] Checking for updates...
	I1002 20:30:04.672261  504462 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:30:04.673261  504462 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 20:30:04.674210  504462 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 20:30:04.675138  504462 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:30:04.676329  504462 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:30:04.677697  504462 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:30:04.678269  504462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:30:04.678340  504462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:30:04.692899  504462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45979
	I1002 20:30:04.693402  504462 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:30:04.694106  504462 main.go:141] libmachine: Using API Version  1
	I1002 20:30:04.694139  504462 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:30:04.694476  504462 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:30:04.694671  504462 main.go:141] libmachine: (functional-175435) Calling .DriverName
	I1002 20:30:04.694914  504462 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:30:04.695191  504462 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:30:04.695229  504462 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:30:04.708988  504462 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I1002 20:30:04.709429  504462 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:30:04.709893  504462 main.go:141] libmachine: Using API Version  1
	I1002 20:30:04.709914  504462 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:30:04.710233  504462 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:30:04.710451  504462 main.go:141] libmachine: (functional-175435) Calling .DriverName
	I1002 20:30:04.740794  504462 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 20:30:04.741698  504462 start.go:304] selected driver: kvm2
	I1002 20:30:04.741721  504462 start.go:924] validating driver "kvm2" against &{Name:functional-175435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-175435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:30:04.741854  504462 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:30:04.743683  504462 out.go:203] 
	W1002 20:30:04.744695  504462 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 20:30:04.745599  504462 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-175435 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.29s)
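
The dry run exits immediately with status 23 because the requested memory (250MB) is validated against a usable minimum of 1800MB before any VM work starts. A toy version of that validation (the 1800MB floor is read off the error text above; the parsing is deliberately simplified):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

const minUsableMB = 1800 // per the RSRC_INSUFFICIENT_REQ_MEMORY message above

// parseMB handles only the "<n>MB" form used in this test run.
func parseMB(s string) (int, error) {
	return strconv.Atoi(strings.TrimSuffix(strings.ToUpper(s), "MB"))
}

// validateMemory rejects requests below the usable minimum, mirroring the
// check that produced exit status 23 above. Sketch, not minikube's code.
func validateMemory(req string) error {
	mb, err := parseMB(req)
	if err != nil {
		return err
	}
	if mb < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB", mb, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory("250MB")) // fails, mirroring the dry-run exit above
}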

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-175435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-175435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (166.047994ms)

-- stdout --
	* [functional-175435] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 20:30:04.521603  504351 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:30:04.521934  504351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:30:04.521943  504351 out.go:374] Setting ErrFile to fd 2...
	I1002 20:30:04.521948  504351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:30:04.522535  504351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 20:30:04.523218  504351 out.go:368] Setting JSON to false
	I1002 20:30:04.524570  504351 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4339,"bootTime":1759432665,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:30:04.524754  504351 start.go:140] virtualization: kvm guest
	I1002 20:30:04.529264  504351 out.go:179] * [functional-175435] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1002 20:30:04.530692  504351 notify.go:220] Checking for updates...
	I1002 20:30:04.531203  504351 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:30:04.532220  504351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:30:04.533414  504351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 20:30:04.534744  504351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 20:30:04.535801  504351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:30:04.537249  504351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:30:04.538821  504351 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:30:04.539409  504351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:30:04.539490  504351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:30:04.556309  504351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I1002 20:30:04.556854  504351 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:30:04.557382  504351 main.go:141] libmachine: Using API Version  1
	I1002 20:30:04.557400  504351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:30:04.557786  504351 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:30:04.557978  504351 main.go:141] libmachine: (functional-175435) Calling .DriverName
	I1002 20:30:04.558238  504351 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:30:04.558531  504351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:30:04.558578  504351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:30:04.572330  504351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1002 20:30:04.572689  504351 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:30:04.573276  504351 main.go:141] libmachine: Using API Version  1
	I1002 20:30:04.573299  504351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:30:04.573659  504351 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:30:04.573882  504351 main.go:141] libmachine: (functional-175435) Calling .DriverName
	I1002 20:30:04.607761  504351 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1002 20:30:04.608880  504351 start.go:304] selected driver: kvm2
	I1002 20:30:04.608895  504351 start.go:924] validating driver "kvm2" against &{Name:functional-175435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-175435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:30:04.609026  504351 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:30:04.610861  504351 out.go:203] 
	W1002 20:30:04.611847  504351 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 20:30:04.612892  504351 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
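
The -f argument above is an ordinary Go text/template evaluated against a status struct, which is why the fields appear as {{.Host}}, {{.Kubelet}}, and so on (the literal label "kublet" in the command is just output text, not a field name). A self-contained rendering sketch with an assumed struct shape:

package main

import (
	"os"
	"text/template"
)

// Status approximates the fields referenced by the template in the test;
// the real struct lives in minikube, so this shape is an assumption.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The same format string the test passes to `status -f`.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}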

TestFunctional/parallel/ServiceCmdConnect (21.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-175435 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-175435 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-mk2v7" [0b024410-2524-4f61-b7a1-ee3b52c940d1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-mk2v7" [0b024410-2524-4f61-b7a1-ee3b52c940d1] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.007139937s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.180:30935
functional_test.go:1680: http://192.168.39.180:30935: success! body:
Request served by hello-node-connect-7d85dfc575-mk2v7

HTTP/1.1 GET /

Host: 192.168.39.180:30935
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.54s)
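
Once "service ... --url" prints the NodePort endpoint, the check is a plain HTTP GET against it until the echo server answers with the request dump shown above. A compact sketch of that probe (the retry cadence is a choice made here, not the test's):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe GETs url until it answers or attempts run out, returning the body.
// Sketch of the harness's endpoint check, not its actual code.
func probe(url string, attempts int) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, rerr := io.ReadAll(resp.Body)
			resp.Body.Close()
			return string(body), rerr
		}
		lastErr = err
		time.Sleep(2 * time.Second) // arbitrary backoff for the sketch
	}
	return "", lastErr
}

func main() {
	body, err := probe("http://192.168.39.180:30935", 5)
	if err != nil {
		panic(err)
	}
	fmt.Println(body) // "Request served by hello-node-connect-..."
}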

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (44.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [0098d185-6998-415d-89c5-d9007d76f77e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005459177s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-175435 get storageclass -o=json
2025/10/02 20:30:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-175435 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-175435 get pvc myclaim -o=json
I1002 20:30:19.519777  497569 retry.go:31] will retry after 1.783647698s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:13dfaab2-d732-4d40-9656-dc14156bdc93 ResourceVersion:807 Generation:0 CreationTimestamp:2025-10-02 20:30:19 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001daa7e0 VolumeMode:0xc001daa7f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-175435 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-175435 apply -f testdata/storage-provisioner/pod.yaml
I1002 20:30:21.480661  497569 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f6afd7f2-f94a-426a-a1a3-ece659b54ea6] Pending
helpers_test.go:352: "sp-pod" [f6afd7f2-f94a-426a-a1a3-ece659b54ea6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f6afd7f2-f94a-426a-a1a3-ece659b54ea6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.00440225s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-175435 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-175435 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-175435 delete -f testdata/storage-provisioner/pod.yaml: (1.251226007s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-175435 apply -f testdata/storage-provisioner/pod.yaml
I1002 20:30:51.975220  497569 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [af80013b-c4c3-42f8-a4ce-82c666c74160] Pending
helpers_test.go:352: "sp-pod" [af80013b-c4c3-42f8-a4ce-82c666c74160] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [af80013b-c4c3-42f8-a4ce-82c666c74160] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00356603s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-175435 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.87s)
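
The retry.go:31 lines show the harness polling the PVC until its phase flips from Pending to Bound, sleeping a randomized interval between attempts. A generic sketch of that retry shape (the backoff policy here is illustrative, not minikube's):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-invokes check with randomized sleeps until it succeeds or
// the deadline passes, logging each delay the way the harness does.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		d := time.Duration(500+rand.Intn(1500)) * time.Millisecond // arbitrary jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
}

func main() {
	// Stand-in for "get pvc myclaim -o=json and inspect status.phase".
	attempts := 0
	_ = retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New(`testpvc phase = "Pending", want "Bound"`)
		}
		return nil
	})
}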

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh -n functional-175435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cp functional-175435:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3271728424/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh -n functional-175435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh -n functional-175435 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

TestFunctional/parallel/MySQL (26.8s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-175435 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vmxsn" [f14ef9eb-7e40-45d6-8a5e-b59241ed797c] Pending
helpers_test.go:352: "mysql-5bb876957f-vmxsn" [f14ef9eb-7e40-45d6-8a5e-b59241ed797c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-vmxsn" [f14ef9eb-7e40-45d6-8a5e-b59241ed797c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.008437745s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-175435 exec mysql-5bb876957f-vmxsn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-175435 exec mysql-5bb876957f-vmxsn -- mysql -ppassword -e "show databases;": exit status 1 (327.710674ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1002 20:30:40.388953  497569 retry.go:31] will retry after 1.046387883s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-175435 exec mysql-5bb876957f-vmxsn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-175435 exec mysql-5bb876957f-vmxsn -- mysql -ppassword -e "show databases;": exit status 1 (136.964896ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1002 20:30:41.572636  497569 retry.go:31] will retry after 1.986220789s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-175435 exec mysql-5bb876957f-vmxsn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.80s)
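
Note on the two failures above: ERROR 1045 and then ERROR 2002 are normal while mysqld is still initializing inside the pod, which is why the test retries with a growing, jittered delay (retry.go:31) instead of failing outright. A minimal sketch of that poll-and-retry pattern, assuming a hypothetical runWithRetry helper rather than minikube's actual retry package:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runWithRetry re-executes the command until it succeeds or attempts are
// exhausted, sleeping a jittered, growing interval between tries.
func runWithRetry(attempts int, base time.Duration, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(args[0], args[1:]...).Run(); err == nil {
			return nil
		}
		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	// Same probe the test runs: succeeds once mysqld accepts connections.
	err := runWithRetry(5, time.Second,
		"kubectl", "--context", "functional-175435", "exec", "mysql-5bb876957f-vmxsn",
		"--", "mysql", "-ppassword", "-e", "show databases;")
	if err != nil {
		fmt.Println("giving up:", err)
	}
}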

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/497569/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo cat /etc/test/nested/copy/497569/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/497569.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo cat /etc/ssl/certs/497569.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/497569.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo cat /usr/share/ca-certificates/497569.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/4975692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo cat /etc/ssl/certs/4975692.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/4975692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo cat /usr/share/ca-certificates/4975692.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)
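
The hash-named files checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: the name is the certificate's subject hash plus a collision index, which is how the system trust store locates CAs. A small standalone sketch (not minikube's test code) of a follow-up assertion one could add, parsing the synced PEM to confirm it is a well-formed certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Paths mirror the ones probed above; run inside the VM.
	for _, path := range []string{
		"/etc/ssl/certs/497569.pem",
		"/usr/share/ca-certificates/497569.pem",
	} {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(path, "read error:", err)
			continue
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println(path, "is not PEM")
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(path, "parse error:", err)
			continue
		}
		fmt.Println(path, "OK, subject:", cert.Subject)
	}
}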

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-175435 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
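
The go-template above ranges over the first node's .metadata.labels map and prints each key. The same template body can be exercised standalone; a sketch against stub data (the labels here are illustrative, not the node's actual set):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stub of the kubectl "get nodes -o json" shape the template walks.
	nodes := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/hostname": "functional-175435",
				"kubernetes.io/os":       "linux",
			}}},
		},
	}
	// Identical template body to the test's --template flag.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}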

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 ssh "sudo systemctl is-active docker": exit status 1 (212.332105ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 ssh "sudo systemctl is-active containerd": exit status 1 (227.212098ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
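
Exit status 3 above is systemd's documented `is-active` code for an inactive unit, so the test treats "non-zero exit plus 'inactive' on stdout" as success: docker and containerd must be off when crio is the active runtime. A sketch of that interpretation in Go (hypothetical helper, not minikube's implementation; run inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive returns the unit's state string and the command's exit code;
// "active"/0 for the running runtime, "inactive"/3 for a stopped one.
func isActive(unit string) (string, int) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		return state, exitErr.ExitCode()
	}
	return state, 0
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		state, code := isActive(unit)
		fmt.Printf("%s: %s (exit %d)\n", unit, state, code)
	}
}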

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/MountCmd/any-port (9.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdany-port2709125706/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759437004385669358" to /tmp/TestFunctionalparallelMountCmdany-port2709125706/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759437004385669358" to /tmp/TestFunctionalparallelMountCmdany-port2709125706/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759437004385669358" to /tmp/TestFunctionalparallelMountCmdany-port2709125706/001/test-1759437004385669358
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.83606ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 20:30:04.629877  497569 retry.go:31] will retry after 642.683808ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 20:30 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 20:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 20:30 test-1759437004385669358
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh cat /mount-9p/test-1759437004385669358
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-175435 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ac006445-65c3-4c11-9419-094fe3867d7e] Pending
helpers_test.go:352: "busybox-mount" [ac006445-65c3-4c11-9419-094fe3867d7e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ac006445-65c3-4c11-9419-094fe3867d7e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ac006445-65c3-4c11-9419-094fe3867d7e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005021605s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-175435 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdany-port2709125706/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.87s)
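
The first findmnt probe fails because the backgrounded `minikube mount` has not finished attaching the 9p share yet, hence the single retry. Inside the guest, the same readiness check can be done by polling /proc/self/mounts for a 9p entry at the target; a minimal guest-side sketch (hypothetical, not the test's code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// mounted9p reports whether target is currently mounted with fstype 9p.
func mounted9p(target string) bool {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		return false
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Fields per line: source target fstype options dump pass.
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == target && fields[2] == "9p" {
			return true
		}
	}
	return false
}

func main() {
	for i := 0; i < 10; i++ {
		if mounted9p("/mount-9p") {
			fmt.Println("9p mount ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}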

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "335.206256ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.45293ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "316.95356ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "52.04332ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-175435 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-175435  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-175435  │ 6e845ac53c879 │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-175435 image ls --format table --alsologtostderr:
I1002 20:30:34.109092  506432 out.go:360] Setting OutFile to fd 1 ...
I1002 20:30:34.109361  506432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:34.109370  506432 out.go:374] Setting ErrFile to fd 2...
I1002 20:30:34.109374  506432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:34.109555  506432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
I1002 20:30:34.110155  506432 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:34.110264  506432 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:34.110698  506432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:34.110797  506432 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:34.125389  506432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44209
I1002 20:30:34.125891  506432 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:34.126419  506432 main.go:141] libmachine: Using API Version  1
I1002 20:30:34.126438  506432 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:34.126868  506432 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:34.127089  506432 main.go:141] libmachine: (functional-175435) Calling .GetState
I1002 20:30:34.129106  506432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:34.129167  506432 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:34.142489  506432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
I1002 20:30:34.142901  506432 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:34.143359  506432 main.go:141] libmachine: Using API Version  1
I1002 20:30:34.143386  506432 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:34.143718  506432 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:34.143914  506432 main.go:141] libmachine: (functional-175435) Calling .DriverName
I1002 20:30:34.144112  506432 ssh_runner.go:195] Run: systemctl --version
I1002 20:30:34.144138  506432 main.go:141] libmachine: (functional-175435) Calling .GetSSHHostname
I1002 20:30:34.146987  506432 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:34.147386  506432 main.go:141] libmachine: (functional-175435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:ec:6f", ip: ""} in network mk-functional-175435: {Iface:virbr1 ExpiryTime:2025-10-02 21:28:01 +0000 UTC Type:0 Mac:52:54:00:b1:ec:6f Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:functional-175435 Clientid:01:52:54:00:b1:ec:6f}
I1002 20:30:34.147412  506432 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined IP address 192.168.39.180 and MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:34.147550  506432 main.go:141] libmachine: (functional-175435) Calling .GetSSHPort
I1002 20:30:34.147731  506432 main.go:141] libmachine: (functional-175435) Calling .GetSSHKeyPath
I1002 20:30:34.147872  506432 main.go:141] libmachine: (functional-175435) Calling .GetSSHUsername
I1002 20:30:34.148012  506432 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/functional-175435/id_rsa Username:docker}
I1002 20:30:34.249187  506432 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:30:34.297290  506432 main.go:141] libmachine: Making call to close driver server
I1002 20:30:34.297313  506432 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:34.297644  506432 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:34.297665  506432 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:34.297675  506432 main.go:141] libmachine: Making call to close driver server
I1002 20:30:34.297684  506432 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:34.297689  506432 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
I1002 20:30:34.297936  506432 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:34.297962  506432 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:34.298059  506432 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-175435 image ls --format json --alsologtostderr:
[{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-175435"],"size":"4943877"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e845ac53c879afdc7083d69542d7fe96ed6ce3a311787b98ca319a5597c06eb","repoDigests":["localhost/minikube-local-cache-test@sha256:887e9025dc7eb6e8f7ac5b0fbf2f21e2d234b1d51578b9d1e29f19e8ffffcdc6"],"repoTags":["localhost/minikube-local-cache-test:functional-175435"],"size":"3330"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-175435 image ls --format json --alsologtostderr:
I1002 20:30:33.459273  506408 out.go:360] Setting OutFile to fd 1 ...
I1002 20:30:33.459636  506408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:33.459652  506408 out.go:374] Setting ErrFile to fd 2...
I1002 20:30:33.459659  506408 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:33.459975  506408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
I1002 20:30:33.460846  506408 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:33.460989  506408 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:33.461586  506408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:33.461677  506408 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:33.476130  506408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
I1002 20:30:33.476721  506408 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:33.477309  506408 main.go:141] libmachine: Using API Version  1
I1002 20:30:33.477336  506408 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:33.477743  506408 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:33.478035  506408 main.go:141] libmachine: (functional-175435) Calling .GetState
I1002 20:30:33.479989  506408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:33.480027  506408 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:33.501688  506408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35209
I1002 20:30:33.502176  506408 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:33.502701  506408 main.go:141] libmachine: Using API Version  1
I1002 20:30:33.502746  506408 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:33.503219  506408 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:33.503463  506408 main.go:141] libmachine: (functional-175435) Calling .DriverName
I1002 20:30:33.503727  506408 ssh_runner.go:195] Run: systemctl --version
I1002 20:30:33.503763  506408 main.go:141] libmachine: (functional-175435) Calling .GetSSHHostname
I1002 20:30:33.507357  506408 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:33.507966  506408 main.go:141] libmachine: (functional-175435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:ec:6f", ip: ""} in network mk-functional-175435: {Iface:virbr1 ExpiryTime:2025-10-02 21:28:01 +0000 UTC Type:0 Mac:52:54:00:b1:ec:6f Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:functional-175435 Clientid:01:52:54:00:b1:ec:6f}
I1002 20:30:33.508025  506408 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined IP address 192.168.39.180 and MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:33.508362  506408 main.go:141] libmachine: (functional-175435) Calling .GetSSHPort
I1002 20:30:33.508637  506408 main.go:141] libmachine: (functional-175435) Calling .GetSSHKeyPath
I1002 20:30:33.508838  506408 main.go:141] libmachine: (functional-175435) Calling .GetSSHUsername
I1002 20:30:33.509047  506408 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/functional-175435/id_rsa Username:docker}
I1002 20:30:33.611563  506408 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:30:34.052852  506408 main.go:141] libmachine: Making call to close driver server
I1002 20:30:34.052870  506408 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:34.053168  506408 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:34.053187  506408 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:34.053196  506408 main.go:141] libmachine: Making call to close driver server
I1002 20:30:34.053204  506408 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:34.053199  506408 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
I1002 20:30:34.053569  506408 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
I1002 20:30:34.053584  506408 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:34.053605  506408 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.66s)
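
The JSON payload above is a flat array of records with id, repoDigests, repoTags, and size (bytes, serialized as a string), so post-processing it takes only a small struct. A sketch of consuming it (field names read off the output above; the command line mirrors the test's):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// image mirrors one record of `minikube image ls --format json`.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-175435",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>" // untagged images have an empty repoTags list
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}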

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-175435 image ls --format yaml --alsologtostderr:
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e845ac53c879afdc7083d69542d7fe96ed6ce3a311787b98ca319a5597c06eb
repoDigests:
- localhost/minikube-local-cache-test@sha256:887e9025dc7eb6e8f7ac5b0fbf2f21e2d234b1d51578b9d1e29f19e8ffffcdc6
repoTags:
- localhost/minikube-local-cache-test:functional-175435
size: "3330"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-175435
size: "4943877"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-175435 image ls --format yaml --alsologtostderr:
I1002 20:30:34.356239  506456 out.go:360] Setting OutFile to fd 1 ...
I1002 20:30:34.356624  506456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:34.356636  506456 out.go:374] Setting ErrFile to fd 2...
I1002 20:30:34.356640  506456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:34.356875  506456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
I1002 20:30:34.357489  506456 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:34.357586  506456 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:34.357975  506456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:34.358041  506456 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:34.371357  506456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
I1002 20:30:34.371777  506456 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:34.372307  506456 main.go:141] libmachine: Using API Version  1
I1002 20:30:34.372336  506456 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:34.372764  506456 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:34.372955  506456 main.go:141] libmachine: (functional-175435) Calling .GetState
I1002 20:30:34.375073  506456 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:34.375111  506456 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:34.387989  506456 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
I1002 20:30:34.388466  506456 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:34.388988  506456 main.go:141] libmachine: Using API Version  1
I1002 20:30:34.389007  506456 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:34.389322  506456 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:34.389521  506456 main.go:141] libmachine: (functional-175435) Calling .DriverName
I1002 20:30:34.389815  506456 ssh_runner.go:195] Run: systemctl --version
I1002 20:30:34.389854  506456 main.go:141] libmachine: (functional-175435) Calling .GetSSHHostname
I1002 20:30:34.392858  506456 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:34.393293  506456 main.go:141] libmachine: (functional-175435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:ec:6f", ip: ""} in network mk-functional-175435: {Iface:virbr1 ExpiryTime:2025-10-02 21:28:01 +0000 UTC Type:0 Mac:52:54:00:b1:ec:6f Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:functional-175435 Clientid:01:52:54:00:b1:ec:6f}
I1002 20:30:34.393336  506456 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined IP address 192.168.39.180 and MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:34.393407  506456 main.go:141] libmachine: (functional-175435) Calling .GetSSHPort
I1002 20:30:34.393600  506456 main.go:141] libmachine: (functional-175435) Calling .GetSSHKeyPath
I1002 20:30:34.393765  506456 main.go:141] libmachine: (functional-175435) Calling .GetSSHUsername
I1002 20:30:34.393891  506456 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/functional-175435/id_rsa Username:docker}
I1002 20:30:34.474238  506456 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 20:30:34.523299  506456 main.go:141] libmachine: Making call to close driver server
I1002 20:30:34.523313  506456 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:34.523626  506456 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:34.523650  506456 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:34.523662  506456 main.go:141] libmachine: Making call to close driver server
I1002 20:30:34.523671  506456 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:34.523727  506456 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
I1002 20:30:34.523953  506456 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
I1002 20:30:34.523982  506456 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:34.523991  506456 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 ssh pgrep buildkitd: exit status 1 (204.71591ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image build -t localhost/my-image:functional-175435 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 image build -t localhost/my-image:functional-175435 testdata/build --alsologtostderr: (3.81705955s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-175435 image build -t localhost/my-image:functional-175435 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2c9e72104fd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-175435
--> 4f8d8b223fe
Successfully tagged localhost/my-image:functional-175435
4f8d8b223feaf0f1da6f6cb4f056e00a262e59902bd00a4eb4c0355774e0c9a6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-175435 image build -t localhost/my-image:functional-175435 testdata/build --alsologtostderr:
I1002 20:30:34.786584  506525 out.go:360] Setting OutFile to fd 1 ...
I1002 20:30:34.786901  506525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:34.786913  506525 out.go:374] Setting ErrFile to fd 2...
I1002 20:30:34.786918  506525 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:34.787135  506525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
I1002 20:30:34.787731  506525 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:34.788413  506525 config.go:182] Loaded profile config "functional-175435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:34.788772  506525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:34.788816  506525 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:34.802429  506525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
I1002 20:30:34.803002  506525 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:34.803499  506525 main.go:141] libmachine: Using API Version  1
I1002 20:30:34.803523  506525 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:34.803973  506525 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:34.804203  506525 main.go:141] libmachine: (functional-175435) Calling .GetState
I1002 20:30:34.806464  506525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 20:30:34.806507  506525 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 20:30:34.823195  506525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
I1002 20:30:34.823765  506525 main.go:141] libmachine: () Calling .GetVersion
I1002 20:30:34.824311  506525 main.go:141] libmachine: Using API Version  1
I1002 20:30:34.824337  506525 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 20:30:34.824674  506525 main.go:141] libmachine: () Calling .GetMachineName
I1002 20:30:34.824923  506525 main.go:141] libmachine: (functional-175435) Calling .DriverName
I1002 20:30:34.825160  506525 ssh_runner.go:195] Run: systemctl --version
I1002 20:30:34.825194  506525 main.go:141] libmachine: (functional-175435) Calling .GetSSHHostname
I1002 20:30:34.828791  506525 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:34.829289  506525 main.go:141] libmachine: (functional-175435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:ec:6f", ip: ""} in network mk-functional-175435: {Iface:virbr1 ExpiryTime:2025-10-02 21:28:01 +0000 UTC Type:0 Mac:52:54:00:b1:ec:6f Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:functional-175435 Clientid:01:52:54:00:b1:ec:6f}
I1002 20:30:34.829328  506525 main.go:141] libmachine: (functional-175435) DBG | domain functional-175435 has defined IP address 192.168.39.180 and MAC address 52:54:00:b1:ec:6f in network mk-functional-175435
I1002 20:30:34.829520  506525 main.go:141] libmachine: (functional-175435) Calling .GetSSHPort
I1002 20:30:34.829699  506525 main.go:141] libmachine: (functional-175435) Calling .GetSSHKeyPath
I1002 20:30:34.829863  506525 main.go:141] libmachine: (functional-175435) Calling .GetSSHUsername
I1002 20:30:34.830025  506525 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/functional-175435/id_rsa Username:docker}
I1002 20:30:34.913048  506525 build_images.go:161] Building image from path: /tmp/build.2790096882.tar
I1002 20:30:34.913113  506525 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 20:30:34.926226  506525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2790096882.tar
I1002 20:30:34.930782  506525 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2790096882.tar: stat -c "%s %y" /var/lib/minikube/build/build.2790096882.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2790096882.tar': No such file or directory
I1002 20:30:34.930811  506525 ssh_runner.go:362] scp /tmp/build.2790096882.tar --> /var/lib/minikube/build/build.2790096882.tar (3072 bytes)
I1002 20:30:34.960328  506525 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2790096882
I1002 20:30:34.971125  506525 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2790096882 -xf /var/lib/minikube/build/build.2790096882.tar
I1002 20:30:34.983240  506525 crio.go:315] Building image: /var/lib/minikube/build/build.2790096882
I1002 20:30:34.983314  506525 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-175435 /var/lib/minikube/build/build.2790096882 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1002 20:30:38.477359  506525 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-175435 /var/lib/minikube/build/build.2790096882 --cgroup-manager=cgroupfs: (3.49401142s)
I1002 20:30:38.477447  506525 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2790096882
I1002 20:30:38.513287  506525 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2790096882.tar
I1002 20:30:38.543637  506525 build_images.go:217] Built localhost/my-image:functional-175435 from /tmp/build.2790096882.tar
I1002 20:30:38.543684  506525 build_images.go:133] succeeded building to: functional-175435
I1002 20:30:38.543690  506525 build_images.go:134] failed building to: 
I1002 20:30:38.543740  506525 main.go:141] libmachine: Making call to close driver server
I1002 20:30:38.543757  506525 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:38.544061  506525 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:38.544103  506525 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
I1002 20:30:38.544128  506525 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:38.544142  506525 main.go:141] libmachine: Making call to close driver server
I1002 20:30:38.544151  506525 main.go:141] libmachine: (functional-175435) Calling .Close
I1002 20:30:38.544403  506525 main.go:141] libmachine: Successfully made call to close driver server
I1002 20:30:38.544417  506525 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 20:30:38.544505  506525 main.go:141] libmachine: (functional-175435) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.51s)
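
The stderr trace above shows the build path for the crio runtime: tar the local context, scp it into the VM as /var/lib/minikube/build/build.NNNN.tar, untar it there, run `sudo podman build --cgroup-manager=cgroupfs`, then clean up. A sketch of just the first step, tarring a build context directory (a standalone illustration under those assumptions, not minikube's build_images.go):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// tarDir writes every regular file under dir into a tar archive at out,
// storing paths relative to dir (as a build context expects).
func tarDir(dir, out string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := tarDir("testdata/build", "/tmp/build.tar"); err != nil {
		fmt.Println(err)
	}
}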

TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.915204115s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-175435
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
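All three update-context variants run the same command: it rewrites the profile's kubeconfig entry so the server address matches the cluster's current IP and port. A quick manual check, assuming the functional-175435 profile:

    out/minikube-linux-amd64 -p functional-175435 update-context
    # the server URL in kubeconfig should now match the VM's address
    kubectl config view -o jsonpath='{.clusters[?(@.name == "functional-175435")].cluster.server}'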

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image load --daemon kicbase/echo-server:functional-175435 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 image load --daemon kicbase/echo-server:functional-175435 --alsologtostderr: (1.200132469s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image load --daemon kicbase/echo-server:functional-175435 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-175435
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image load --daemon kicbase/echo-server:functional-175435 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image save kicbase/echo-server:functional-175435 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image rm kicbase/echo-server:functional-175435 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-175435
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 image save --daemon kicbase/echo-server:functional-175435 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-175435
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.68s)
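Taken together, the ImageCommands cases above cover the full round trip between the host docker daemon, a tarball, and the guest runtime. A condensed sketch of that cycle, assuming the functional-175435 profile and a hypothetical /tmp/echo.tar path:

    out/minikube-linux-amd64 -p functional-175435 image load --daemon kicbase/echo-server:functional-175435  # host daemon -> guest
    out/minikube-linux-amd64 -p functional-175435 image save kicbase/echo-server:functional-175435 /tmp/echo.tar  # guest -> tarball
    out/minikube-linux-amd64 -p functional-175435 image rm kicbase/echo-server:functional-175435
    out/minikube-linux-amd64 -p functional-175435 image load /tmp/echo.tar  # tarball -> guest
    out/minikube-linux-amd64 -p functional-175435 image save --daemon kicbase/echo-server:functional-175435  # guest -> host daemon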

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.72s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdspecific-port2546145923/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.211969ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 20:30:14.493832  497569 retry.go:31] will retry after 430.791514ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdspecific-port2546145923/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-175435 ssh "sudo umount -f /mount-9p": exit status 1 (241.563566ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-175435 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdspecific-port2546145923/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)
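The first findmnt failure in this test is the poll racing the mount daemon, hence the retry after 430ms; the final umount failure is likewise expected, since the mount had already been torn down. The same session by hand, assuming a hypothetical host directory /tmp/src:

    out/minikube-linux-amd64 mount -p functional-175435 /tmp/src:/mount-9p --port 46464 &
    out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T /mount-9p | grep 9p"  # verify the 9p mount appeared
    out/minikube-linux-amd64 mount -p functional-175435 --kill=true                     # stop all mount helpers for the profile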

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup7109070/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup7109070/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup7109070/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-175435 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup7109070/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup7109070/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-175435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup7109070/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (22.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-175435 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-175435 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-pfzpj" [3a87f5c5-99b0-4b9f-bd14-851ba5c6b52d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-pfzpj" [3a87f5c5-99b0-4b9f-bd14-851ba5c6b52d] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.004252244s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.26s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 service list: (1.26345526s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-175435 service list -o json: (1.321872324s)
functional_test.go:1504: Took "1.321976445s" to run "out/minikube-linux-amd64 -p functional-175435 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.180:32275
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-175435 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.180:32275
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
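The ServiceCmd cases all resolve the same hello-node NodePort service; service --url prints the node-IP:NodePort endpoint without opening a browser. A sketch of hitting the endpoint once resolved (the curl step is an assumption, not part of the test):

    URL=$(out/minikube-linux-amd64 -p functional-175435 service hello-node --url)
    curl -s "$URL"  # kicbase/echo-server replies with the request details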

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-175435
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-175435
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-175435
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (236.36s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:32:16.831881  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:32:44.543634  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m55.687735509s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (236.36s)
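The --ha flag brings up three control-plane nodes behind a shared virtual IP; later logs in this run show clients reaching the API server at https://192.168.39.254:8443 rather than any single node address. A quick check of the shared endpoint, assuming the ha-942958 profile:

    kubectl --context ha-942958 cluster-info  # control plane should answer at the VIP, not a node IP
    out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5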

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.57s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 kubectl -- rollout status deployment/busybox: (4.37867272s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-2mpdk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-pszgr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-v6xtn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-2mpdk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-pszgr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-v6xtn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-2mpdk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-pszgr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-v6xtn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.57s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-2mpdk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-2mpdk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-pszgr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-pszgr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-v6xtn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 kubectl -- exec busybox-7b57f96db7-v6xtn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
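host.minikube.internal is the name minikube injects for the host side of the VM network (the 192.168.39.1 gateway pinged above); the awk 'NR==5' | cut pipeline merely scrapes the resolved address out of busybox's nslookup output. Assuming this minikube version still writes the entry into each node's /etc/hosts, it can be inspected directly:

    out/minikube-linux-amd64 -p ha-942958 ssh "grep minikube.internal /etc/hosts"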

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.54s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 node add --alsologtostderr -v 5
E1002 20:35:06.129028  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:06.135481  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:06.146923  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:06.168284  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:06.209757  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:06.291254  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:06.452805  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:06.774145  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:07.416184  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:08.698501  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:11.260188  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:16.381599  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:26.624010  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:35:47.105633  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 node add --alsologtostderr -v 5: (43.713594475s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.54s)
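node add joins a worker by default. The repeated cert_rotation errors during the join appear to be a client-side watcher still referencing the functional-175435 profile deleted earlier in the run, not a fault in this cluster. The two add variants (the --control-plane form assumes a minikube build with HA support):

    out/minikube-linux-amd64 -p ha-942958 node add                  # add a worker, as this test does
    out/minikube-linux-amd64 -p ha-942958 node add --control-plane  # would add a fourth control plane instead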

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-942958 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.83s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp testdata/cp-test.txt ha-942958:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile675888586/001/cp-test_ha-942958.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958:/home/docker/cp-test.txt ha-942958-m02:/home/docker/cp-test_ha-942958_ha-942958-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test_ha-942958_ha-942958-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958:/home/docker/cp-test.txt ha-942958-m03:/home/docker/cp-test_ha-942958_ha-942958-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test_ha-942958_ha-942958-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958:/home/docker/cp-test.txt ha-942958-m04:/home/docker/cp-test_ha-942958_ha-942958-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test_ha-942958_ha-942958-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp testdata/cp-test.txt ha-942958-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile675888586/001/cp-test_ha-942958-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m02:/home/docker/cp-test.txt ha-942958:/home/docker/cp-test_ha-942958-m02_ha-942958.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test_ha-942958-m02_ha-942958.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m02:/home/docker/cp-test.txt ha-942958-m03:/home/docker/cp-test_ha-942958-m02_ha-942958-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test_ha-942958-m02_ha-942958-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m02:/home/docker/cp-test.txt ha-942958-m04:/home/docker/cp-test_ha-942958-m02_ha-942958-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test_ha-942958-m02_ha-942958-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp testdata/cp-test.txt ha-942958-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile675888586/001/cp-test_ha-942958-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m03:/home/docker/cp-test.txt ha-942958:/home/docker/cp-test_ha-942958-m03_ha-942958.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test_ha-942958-m03_ha-942958.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m03:/home/docker/cp-test.txt ha-942958-m02:/home/docker/cp-test_ha-942958-m03_ha-942958-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test_ha-942958-m03_ha-942958-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m03:/home/docker/cp-test.txt ha-942958-m04:/home/docker/cp-test_ha-942958-m03_ha-942958-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test_ha-942958-m03_ha-942958-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp testdata/cp-test.txt ha-942958-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile675888586/001/cp-test_ha-942958-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m04:/home/docker/cp-test.txt ha-942958:/home/docker/cp-test_ha-942958-m04_ha-942958.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958 "sudo cat /home/docker/cp-test_ha-942958-m04_ha-942958.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m04:/home/docker/cp-test.txt ha-942958-m02:/home/docker/cp-test_ha-942958-m04_ha-942958-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test_ha-942958-m04_ha-942958-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 cp ha-942958-m04:/home/docker/cp-test.txt ha-942958-m03:/home/docker/cp-test_ha-942958-m04_ha-942958-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m03 "sudo cat /home/docker/cp-test_ha-942958-m04_ha-942958-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.83s)
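CopyFile pushes the same file through every node-to-node pair: minikube cp accepts <node>:<path> on either side, and each hop is verified with ssh -n <node> cat. The general shape of one hop:

    out/minikube-linux-amd64 -p ha-942958 cp testdata/cp-test.txt ha-942958-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-942958 ssh -n ha-942958-m02 "sudo cat /home/docker/cp-test.txt"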

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (75.6s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 node stop m02 --alsologtostderr -v 5
E1002 20:36:28.067077  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:37:16.831608  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 node stop m02 --alsologtostderr -v 5: (1m14.978457046s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5: exit status 7 (619.627258ms)

-- stdout --
	ha-942958
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942958-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942958-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942958-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1002 20:37:17.670850  511473 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:37:17.671060  511473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:37:17.671068  511473 out.go:374] Setting ErrFile to fd 2...
	I1002 20:37:17.671072  511473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:37:17.671275  511473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 20:37:17.671464  511473 out.go:368] Setting JSON to false
	I1002 20:37:17.671493  511473 mustload.go:65] Loading cluster: ha-942958
	I1002 20:37:17.671604  511473 notify.go:220] Checking for updates...
	I1002 20:37:17.672040  511473 config.go:182] Loaded profile config "ha-942958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:37:17.672071  511473 status.go:174] checking status of ha-942958 ...
	I1002 20:37:17.672585  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:17.672640  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:17.687309  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I1002 20:37:17.687819  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:17.688462  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:17.688488  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:17.688895  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:17.689115  511473 main.go:141] libmachine: (ha-942958) Calling .GetState
	I1002 20:37:17.691101  511473 status.go:371] ha-942958 host status = "Running" (err=<nil>)
	I1002 20:37:17.691117  511473 host.go:66] Checking if "ha-942958" exists ...
	I1002 20:37:17.691409  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:17.691445  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:17.704694  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I1002 20:37:17.705208  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:17.705685  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:17.705738  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:17.706047  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:17.706228  511473 main.go:141] libmachine: (ha-942958) Calling .GetIP
	I1002 20:37:17.709387  511473 main.go:141] libmachine: (ha-942958) DBG | domain ha-942958 has defined MAC address 52:54:00:cf:26:f7 in network mk-ha-942958
	I1002 20:37:17.709922  511473 main.go:141] libmachine: (ha-942958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:26:f7", ip: ""} in network mk-ha-942958: {Iface:virbr1 ExpiryTime:2025-10-02 21:31:15 +0000 UTC Type:0 Mac:52:54:00:cf:26:f7 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-942958 Clientid:01:52:54:00:cf:26:f7}
	I1002 20:37:17.709957  511473 main.go:141] libmachine: (ha-942958) DBG | domain ha-942958 has defined IP address 192.168.39.172 and MAC address 52:54:00:cf:26:f7 in network mk-ha-942958
	I1002 20:37:17.710179  511473 host.go:66] Checking if "ha-942958" exists ...
	I1002 20:37:17.710496  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:17.710540  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:17.724280  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I1002 20:37:17.724752  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:17.725142  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:17.725166  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:17.725496  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:17.725726  511473 main.go:141] libmachine: (ha-942958) Calling .DriverName
	I1002 20:37:17.725949  511473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:37:17.725993  511473 main.go:141] libmachine: (ha-942958) Calling .GetSSHHostname
	I1002 20:37:17.728595  511473 main.go:141] libmachine: (ha-942958) DBG | domain ha-942958 has defined MAC address 52:54:00:cf:26:f7 in network mk-ha-942958
	I1002 20:37:17.729131  511473 main.go:141] libmachine: (ha-942958) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:26:f7", ip: ""} in network mk-ha-942958: {Iface:virbr1 ExpiryTime:2025-10-02 21:31:15 +0000 UTC Type:0 Mac:52:54:00:cf:26:f7 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-942958 Clientid:01:52:54:00:cf:26:f7}
	I1002 20:37:17.729161  511473 main.go:141] libmachine: (ha-942958) DBG | domain ha-942958 has defined IP address 192.168.39.172 and MAC address 52:54:00:cf:26:f7 in network mk-ha-942958
	I1002 20:37:17.729331  511473 main.go:141] libmachine: (ha-942958) Calling .GetSSHPort
	I1002 20:37:17.729509  511473 main.go:141] libmachine: (ha-942958) Calling .GetSSHKeyPath
	I1002 20:37:17.729668  511473 main.go:141] libmachine: (ha-942958) Calling .GetSSHUsername
	I1002 20:37:17.729845  511473 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/ha-942958/id_rsa Username:docker}
	I1002 20:37:17.807980  511473 ssh_runner.go:195] Run: systemctl --version
	I1002 20:37:17.814270  511473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:37:17.832235  511473 kubeconfig.go:125] found "ha-942958" server: "https://192.168.39.254:8443"
	I1002 20:37:17.832283  511473 api_server.go:166] Checking apiserver status ...
	I1002 20:37:17.832336  511473 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:37:17.851086  511473 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup
	W1002 20:37:17.863150  511473 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:37:17.863216  511473 ssh_runner.go:195] Run: ls
	I1002 20:37:17.868185  511473 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1002 20:37:17.874567  511473 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1002 20:37:17.874588  511473 status.go:463] ha-942958 apiserver status = Running (err=<nil>)
	I1002 20:37:17.874603  511473 status.go:176] ha-942958 status: &{Name:ha-942958 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:37:17.874621  511473 status.go:174] checking status of ha-942958-m02 ...
	I1002 20:37:17.874965  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:17.875007  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:17.889197  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1002 20:37:17.889614  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:17.890099  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:17.890125  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:17.890472  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:17.890716  511473 main.go:141] libmachine: (ha-942958-m02) Calling .GetState
	I1002 20:37:17.892423  511473 status.go:371] ha-942958-m02 host status = "Stopped" (err=<nil>)
	I1002 20:37:17.892439  511473 status.go:384] host is not running, skipping remaining checks
	I1002 20:37:17.892447  511473 status.go:176] ha-942958-m02 status: &{Name:ha-942958-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:37:17.892469  511473 status.go:174] checking status of ha-942958-m03 ...
	I1002 20:37:17.892779  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:17.892843  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:17.906992  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I1002 20:37:17.907403  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:17.907835  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:17.907859  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:17.908166  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:17.908363  511473 main.go:141] libmachine: (ha-942958-m03) Calling .GetState
	I1002 20:37:17.909904  511473 status.go:371] ha-942958-m03 host status = "Running" (err=<nil>)
	I1002 20:37:17.909922  511473 host.go:66] Checking if "ha-942958-m03" exists ...
	I1002 20:37:17.910225  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:17.910258  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:17.922993  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
	I1002 20:37:17.923379  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:17.923804  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:17.923824  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:17.924143  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:17.924329  511473 main.go:141] libmachine: (ha-942958-m03) Calling .GetIP
	I1002 20:37:17.927069  511473 main.go:141] libmachine: (ha-942958-m03) DBG | domain ha-942958-m03 has defined MAC address 52:54:00:b0:dc:51 in network mk-ha-942958
	I1002 20:37:17.927523  511473 main.go:141] libmachine: (ha-942958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:dc:51", ip: ""} in network mk-ha-942958: {Iface:virbr1 ExpiryTime:2025-10-02 21:33:40 +0000 UTC Type:0 Mac:52:54:00:b0:dc:51 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-942958-m03 Clientid:01:52:54:00:b0:dc:51}
	I1002 20:37:17.927551  511473 main.go:141] libmachine: (ha-942958-m03) DBG | domain ha-942958-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:b0:dc:51 in network mk-ha-942958
	I1002 20:37:17.927693  511473 host.go:66] Checking if "ha-942958-m03" exists ...
	I1002 20:37:17.928017  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:17.928060  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:17.941075  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32801
	I1002 20:37:17.941494  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:17.941893  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:17.941931  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:17.942227  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:17.942385  511473 main.go:141] libmachine: (ha-942958-m03) Calling .DriverName
	I1002 20:37:17.942576  511473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:37:17.942609  511473 main.go:141] libmachine: (ha-942958-m03) Calling .GetSSHHostname
	I1002 20:37:17.945494  511473 main.go:141] libmachine: (ha-942958-m03) DBG | domain ha-942958-m03 has defined MAC address 52:54:00:b0:dc:51 in network mk-ha-942958
	I1002 20:37:17.946002  511473 main.go:141] libmachine: (ha-942958-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:dc:51", ip: ""} in network mk-ha-942958: {Iface:virbr1 ExpiryTime:2025-10-02 21:33:40 +0000 UTC Type:0 Mac:52:54:00:b0:dc:51 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-942958-m03 Clientid:01:52:54:00:b0:dc:51}
	I1002 20:37:17.946070  511473 main.go:141] libmachine: (ha-942958-m03) DBG | domain ha-942958-m03 has defined IP address 192.168.39.209 and MAC address 52:54:00:b0:dc:51 in network mk-ha-942958
	I1002 20:37:17.946187  511473 main.go:141] libmachine: (ha-942958-m03) Calling .GetSSHPort
	I1002 20:37:17.946357  511473 main.go:141] libmachine: (ha-942958-m03) Calling .GetSSHKeyPath
	I1002 20:37:17.946499  511473 main.go:141] libmachine: (ha-942958-m03) Calling .GetSSHUsername
	I1002 20:37:17.946661  511473 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/ha-942958-m03/id_rsa Username:docker}
	I1002 20:37:18.032496  511473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:37:18.050614  511473 kubeconfig.go:125] found "ha-942958" server: "https://192.168.39.254:8443"
	I1002 20:37:18.050648  511473 api_server.go:166] Checking apiserver status ...
	I1002 20:37:18.050722  511473 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:37:18.071605  511473 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1794/cgroup
	W1002 20:37:18.083233  511473 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1794/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:37:18.083294  511473 ssh_runner.go:195] Run: ls
	I1002 20:37:18.088303  511473 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1002 20:37:18.093762  511473 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1002 20:37:18.093782  511473 status.go:463] ha-942958-m03 apiserver status = Running (err=<nil>)
	I1002 20:37:18.093790  511473 status.go:176] ha-942958-m03 status: &{Name:ha-942958-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:37:18.093805  511473 status.go:174] checking status of ha-942958-m04 ...
	I1002 20:37:18.094080  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:18.094114  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:18.107728  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44091
	I1002 20:37:18.108137  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:18.108639  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:18.108674  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:18.109038  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:18.109214  511473 main.go:141] libmachine: (ha-942958-m04) Calling .GetState
	I1002 20:37:18.111026  511473 status.go:371] ha-942958-m04 host status = "Running" (err=<nil>)
	I1002 20:37:18.111046  511473 host.go:66] Checking if "ha-942958-m04" exists ...
	I1002 20:37:18.111323  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:18.111382  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:18.124211  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I1002 20:37:18.124592  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:18.125036  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:18.125072  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:18.125483  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:18.125736  511473 main.go:141] libmachine: (ha-942958-m04) Calling .GetIP
	I1002 20:37:18.128296  511473 main.go:141] libmachine: (ha-942958-m04) DBG | domain ha-942958-m04 has defined MAC address 52:54:00:2d:8f:d6 in network mk-ha-942958
	I1002 20:37:18.128783  511473 main.go:141] libmachine: (ha-942958-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8f:d6", ip: ""} in network mk-ha-942958: {Iface:virbr1 ExpiryTime:2025-10-02 21:35:19 +0000 UTC Type:0 Mac:52:54:00:2d:8f:d6 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-942958-m04 Clientid:01:52:54:00:2d:8f:d6}
	I1002 20:37:18.128812  511473 main.go:141] libmachine: (ha-942958-m04) DBG | domain ha-942958-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:2d:8f:d6 in network mk-ha-942958
	I1002 20:37:18.128981  511473 host.go:66] Checking if "ha-942958-m04" exists ...
	I1002 20:37:18.129272  511473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:37:18.129324  511473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:37:18.142569  511473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41955
	I1002 20:37:18.142943  511473 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:37:18.143324  511473 main.go:141] libmachine: Using API Version  1
	I1002 20:37:18.143343  511473 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:37:18.143670  511473 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:37:18.143858  511473 main.go:141] libmachine: (ha-942958-m04) Calling .DriverName
	I1002 20:37:18.144034  511473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:37:18.144057  511473 main.go:141] libmachine: (ha-942958-m04) Calling .GetSSHHostname
	I1002 20:37:18.146929  511473 main.go:141] libmachine: (ha-942958-m04) DBG | domain ha-942958-m04 has defined MAC address 52:54:00:2d:8f:d6 in network mk-ha-942958
	I1002 20:37:18.147338  511473 main.go:141] libmachine: (ha-942958-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:8f:d6", ip: ""} in network mk-ha-942958: {Iface:virbr1 ExpiryTime:2025-10-02 21:35:19 +0000 UTC Type:0 Mac:52:54:00:2d:8f:d6 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-942958-m04 Clientid:01:52:54:00:2d:8f:d6}
	I1002 20:37:18.147360  511473 main.go:141] libmachine: (ha-942958-m04) DBG | domain ha-942958-m04 has defined IP address 192.168.39.244 and MAC address 52:54:00:2d:8f:d6 in network mk-ha-942958
	I1002 20:37:18.147540  511473 main.go:141] libmachine: (ha-942958-m04) Calling .GetSSHPort
	I1002 20:37:18.147715  511473 main.go:141] libmachine: (ha-942958-m04) Calling .GetSSHKeyPath
	I1002 20:37:18.147871  511473 main.go:141] libmachine: (ha-942958-m04) Calling .GetSSHUsername
	I1002 20:37:18.147984  511473 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/ha-942958-m04/id_rsa Username:docker}
	I1002 20:37:18.224634  511473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:37:18.239840  511473 status.go:176] ha-942958-m04 status: &{Name:ha-942958-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (75.60s)
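Note: the status path in the stderr block above shells into each node and runs `df -h /var | awk 'NR==2{print $5}'` to read the volume's use percentage. A minimal standalone sketch of that same parse, assuming ordinary `df -h` output (the helper name here is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseUsePercent extracts the "Use%" column from the second line of
	// `df -h <path>` output, mirroring the awk 'NR==2{print $5}' pipeline
	// in the log above.
	func parseUsePercent(dfOutput string) (string, error) {
		lines := strings.Split(strings.TrimSpace(dfOutput), "\n")
		if len(lines) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", dfOutput)
		}
		fields := strings.Fields(lines[1]) // NR==2
		if len(fields) < 5 {
			return "", fmt.Errorf("unexpected df columns: %q", lines[1])
		}
		return fields[4], nil // $5 is Use%
	}

	func main() {
		sample := "Filesystem  Size  Used Avail Use% Mounted on\n/dev/vda1    17G  3.2G   13G  20% /var"
		pct, err := parseUsePercent(sample)
		if err != nil {
			panic(err)
		}
		fmt.Println(pct) // 20%
	}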
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)
TestMultiControlPlane/serial/RestartSecondaryNode (34.84s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 node start m02 --alsologtostderr -v 5
E1002 20:37:49.989473  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 node start m02 --alsologtostderr -v 5: (33.786899177s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.84s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.002804187s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (358.57s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 stop --alsologtostderr -v 5
E1002 20:40:06.134758  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:40:33.831790  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 stop --alsologtostderr -v 5: (3m57.518001941s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 start --wait true --alsologtostderr -v 5
E1002 20:42:16.831665  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:43:39.906004  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 start --wait true --alsologtostderr -v 5: (2m0.927627733s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (358.57s)
TestMultiControlPlane/serial/DeleteSecondaryNode (18.63s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 node delete m03 --alsologtostderr -v 5: (17.814814997s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.63s)
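Note: the `kubectl get nodes -o go-template=...` call above walks every node's conditions and prints the status of each `Ready` condition. A sketch of the same template evaluated locally with Go's text/template against JSON-decoded maps (the stub data here is illustrative; against maps the lowercase keys resolve exactly as kubectl's go-template output does):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Same template string the test passes to kubectl, minus shell quoting.
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))

		// Stand-in for one JSON-decoded node with a single Ready condition.
		ready := func(s string) map[string]interface{} {
			return map[string]interface{}{
				"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "Ready", "status": s},
					},
				},
			}
		}
		nodes := map[string]interface{}{
			"items": []interface{}{ready("True"), ready("True")},
		}

		// Prints one " True" line per Ready node.
		if err := tmpl.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}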
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)
TestMultiControlPlane/serial/StopCluster (253.22s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 stop --alsologtostderr -v 5
E1002 20:45:06.137412  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:47:16.831159  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 stop --alsologtostderr -v 5: (4m13.117202322s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5: exit status 7 (104.369467ms)
-- stdout --
	ha-942958
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942958-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942958-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 20:48:25.723112  515337 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:48:25.723385  515337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:25.723395  515337 out.go:374] Setting ErrFile to fd 2...
	I1002 20:48:25.723401  515337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:25.723606  515337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 20:48:25.723816  515337 out.go:368] Setting JSON to false
	I1002 20:48:25.723854  515337 mustload.go:65] Loading cluster: ha-942958
	I1002 20:48:25.723952  515337 notify.go:220] Checking for updates...
	I1002 20:48:25.724259  515337 config.go:182] Loaded profile config "ha-942958": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:25.724277  515337 status.go:174] checking status of ha-942958 ...
	I1002 20:48:25.724721  515337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:48:25.724777  515337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:48:25.738408  515337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I1002 20:48:25.738862  515337 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:48:25.739329  515337 main.go:141] libmachine: Using API Version  1
	I1002 20:48:25.739348  515337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:48:25.739700  515337 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:48:25.739909  515337 main.go:141] libmachine: (ha-942958) Calling .GetState
	I1002 20:48:25.741597  515337 status.go:371] ha-942958 host status = "Stopped" (err=<nil>)
	I1002 20:48:25.741612  515337 status.go:384] host is not running, skipping remaining checks
	I1002 20:48:25.741619  515337 status.go:176] ha-942958 status: &{Name:ha-942958 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:48:25.741645  515337 status.go:174] checking status of ha-942958-m02 ...
	I1002 20:48:25.741937  515337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:48:25.741969  515337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:48:25.758632  515337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I1002 20:48:25.758999  515337 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:48:25.759447  515337 main.go:141] libmachine: Using API Version  1
	I1002 20:48:25.759477  515337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:48:25.759872  515337 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:48:25.760081  515337 main.go:141] libmachine: (ha-942958-m02) Calling .GetState
	I1002 20:48:25.761639  515337 status.go:371] ha-942958-m02 host status = "Stopped" (err=<nil>)
	I1002 20:48:25.761655  515337 status.go:384] host is not running, skipping remaining checks
	I1002 20:48:25.761665  515337 status.go:176] ha-942958-m02 status: &{Name:ha-942958-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:48:25.761690  515337 status.go:174] checking status of ha-942958-m04 ...
	I1002 20:48:25.762011  515337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:48:25.762075  515337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:48:25.774837  515337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I1002 20:48:25.775170  515337 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:48:25.775602  515337 main.go:141] libmachine: Using API Version  1
	I1002 20:48:25.775624  515337 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:48:25.775935  515337 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:48:25.776131  515337 main.go:141] libmachine: (ha-942958-m04) Calling .GetState
	I1002 20:48:25.777907  515337 status.go:371] ha-942958-m04 host status = "Stopped" (err=<nil>)
	I1002 20:48:25.777921  515337 status.go:384] host is not running, skipping remaining checks
	I1002 20:48:25.777926  515337 status.go:176] ha-942958-m04 status: &{Name:ha-942958-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (253.22s)
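Note: the "Non-zero exit ... exit status 7" above is the expected outcome here, not a failure: `minikube status` reports stopped hosts through its exit code while still printing the per-node breakdown. A hedged sketch of how a caller might branch on that, using standard os/exec semantics:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// After a full stop, the test expects a non-zero status exit code
		// (7 in the log above) rather than treating it as an error.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-942958", "status")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("all hosts running:\n%s", out)
		case errors.As(err, &exitErr):
			// Stopped hosts surface here; stdout still carries the
			// per-node breakdown shown in the -- stdout -- block.
			fmt.Printf("status exit code %d:\n%s", exitErr.ExitCode(), out)
		default:
			fmt.Println("could not run minikube:", err)
		}
	}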
TestMultiControlPlane/serial/RestartCluster (114.1s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:50:06.129563  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m53.325482928s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (114.10s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)
TestMultiControlPlane/serial/AddSecondaryNode (72.16s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 node add --control-plane --alsologtostderr -v 5
E1002 20:51:29.193636  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-942958 node add --control-plane --alsologtostderr -v 5: (1m11.300775349s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-942958 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.16s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
TestJSONOutput/start/Command (78.35s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-829816 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:52:16.832004  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-829816 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.349079894s)
--- PASS: TestJSONOutput/start/Command (78.35s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.71s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-829816 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-829816 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (6.97s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-829816 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-829816 --output=json --user=testUser: (6.973592441s)
--- PASS: TestJSONOutput/stop/Command (6.97s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-762197 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-762197 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (61.412961ms)
-- stdout --
	{"specversion":"1.0","id":"23d82149-e78f-42a9-82ab-c14a5882536c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-762197] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8122a847-2e91-42c0-88a9-df4c83ffec23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"2fff7fa1-aa78-44b5-ac04-333eac8ef8be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7d262e7-5970-444b-bebb-a4401f141a53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig"}}
	{"specversion":"1.0","id":"6d28c2f6-3c14-4182-bf4d-50c30348e692","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube"}}
	{"specversion":"1.0","id":"61183cb0-80cc-4f06-a512-8c8548aa1e46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0e2e468b-6928-4c20-8b85-cdfd1fc3c5cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a70fdaa7-0b39-4b65-b9dd-8cb406b9cddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-762197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-762197
--- PASS: TestErrorJSONOutput (0.20s)
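Note: each line that `--output=json` emits is a CloudEvents-style envelope whose `type` is one of io.k8s.sigs.minikube.step, .info, or .error, with a flat string-valued `data` map, as visible in the -- stdout -- block above. A minimal decoder sketch for one such line (the struct name is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// cloudEvent mirrors the envelope fields visible in the stdout block
	// above; data is a flat map of string values in these events.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"a70fdaa7-0b39-4b65-b9dd-8cb406b9cddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		// The error event carries the exit code as a string, matching the
		// `exit status 56` asserted above.
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	}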
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)
TestMinikubeProfile (75.04s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-922361 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-922361 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.019315465s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-933585 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-933585 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.268255921s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-922361
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-933585
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-933585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-933585
helpers_test.go:175: Cleaning up "first-922361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-922361
--- PASS: TestMinikubeProfile (75.04s)
TestMountStart/serial/StartWithMountFirst (21.54s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-056765 --memory=3072 --mount-string /tmp/TestMountStartserial4183711444/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-056765 --memory=3072 --mount-string /tmp/TestMountStartserial4183711444/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.541924276s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.54s)
TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-056765 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-056765 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
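Note: the verification above checks the 9p host mount both by listing it and by running `findmnt --json /minikube-host` over SSH. A sketch of parsing that JSON shape, matching findmnt's documented output format (the sample values are illustrative, not taken from this run):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// findmntOutput matches the JSON shape of `findmnt --json <target>`.
	type findmntOutput struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			FSType  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		// Illustrative sample of what findmnt might print for the mount.
		sample := `{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`

		var out findmntOutput
		if err := json.Unmarshal([]byte(sample), &out); err != nil {
			panic(err)
		}
		for _, fs := range out.Filesystems {
			// A healthy mount reports fstype 9p at the expected target.
			fmt.Printf("%s on %s (%s)\n", fs.Source, fs.Target, fs.FSType)
		}
	}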
TestMountStart/serial/StartWithMountSecond (21.45s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-072568 --memory=3072 --mount-string /tmp/TestMountStartserial4183711444/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-072568 --memory=3072 --mount-string /tmp/TestMountStartserial4183711444/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.454126449s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.45s)
TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-072568 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-072568 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)
TestMountStart/serial/DeleteFirst (0.72s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-056765 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)
TestMountStart/serial/VerifyMountPostDelete (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-072568 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-072568 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)
TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-072568
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-072568: (1.230210331s)
--- PASS: TestMountStart/serial/Stop (1.23s)
TestMountStart/serial/RestartStopped (19.34s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-072568
E1002 20:55:06.128918  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-072568: (18.336264443s)
--- PASS: TestMountStart/serial/RestartStopped (19.34s)
TestMountStart/serial/VerifyMountPostStop (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-072568 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-072568 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)
TestMultiNode/serial/FreshStart2Nodes (95.41s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-091885 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-091885 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.991908475s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.41s)
TestMultiNode/serial/DeployApp2Nodes (5.86s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-091885 -- rollout status deployment/busybox: (4.419905575s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-nmbxc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-rc56h -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-nmbxc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-rc56h -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-nmbxc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-rc56h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.86s)
TestMultiNode/serial/PingHostFrom2Pods (0.79s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-nmbxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-nmbxc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-rc56h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-091885 -- exec busybox-7b57f96db7-rc56h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
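Note: the test above derives the host's IP inside each pod by running `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` and then pinging the result. A standalone sketch of that line-and-field slicing, assuming busybox-style nslookup output (the sample text is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIPFromNslookup reproduces the pipeline above: take line 5 of the
	// output (awk 'NR==5'), then the third space-separated field
	// (cut -d' ' -f3). Like cut, SplitN counts consecutive delimiters as
	// separating empty fields.
	func hostIPFromNslookup(out string) string {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.SplitN(lines[4], " ", 4)
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		// Illustrative busybox-style nslookup output.
		sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress 1: 192.168.39.1\n"
		fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1
	}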
TestMultiNode/serial/AddNode (44s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-091885 -v=5 --alsologtostderr
E1002 20:57:16.832210  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-091885 -v=5 --alsologtostderr: (43.451265733s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.00s)
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-091885 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
TestMultiNode/serial/ProfileList (0.59s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)
TestMultiNode/serial/CopyFile (7.16s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp testdata/cp-test.txt multinode-091885:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile450352875/001/cp-test_multinode-091885.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885:/home/docker/cp-test.txt multinode-091885-m02:/home/docker/cp-test_multinode-091885_multinode-091885-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m02 "sudo cat /home/docker/cp-test_multinode-091885_multinode-091885-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885:/home/docker/cp-test.txt multinode-091885-m03:/home/docker/cp-test_multinode-091885_multinode-091885-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m03 "sudo cat /home/docker/cp-test_multinode-091885_multinode-091885-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp testdata/cp-test.txt multinode-091885-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile450352875/001/cp-test_multinode-091885-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885-m02:/home/docker/cp-test.txt multinode-091885:/home/docker/cp-test_multinode-091885-m02_multinode-091885.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885 "sudo cat /home/docker/cp-test_multinode-091885-m02_multinode-091885.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885-m02:/home/docker/cp-test.txt multinode-091885-m03:/home/docker/cp-test_multinode-091885-m02_multinode-091885-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m03 "sudo cat /home/docker/cp-test_multinode-091885-m02_multinode-091885-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp testdata/cp-test.txt multinode-091885-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile450352875/001/cp-test_multinode-091885-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885-m03:/home/docker/cp-test.txt multinode-091885:/home/docker/cp-test_multinode-091885-m03_multinode-091885.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885 "sudo cat /home/docker/cp-test_multinode-091885-m03_multinode-091885.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 cp multinode-091885-m03:/home/docker/cp-test.txt multinode-091885-m02:/home/docker/cp-test_multinode-091885-m03_multinode-091885-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 ssh -n multinode-091885-m02 "sudo cat /home/docker/cp-test_multinode-091885-m03_multinode-091885-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.16s)
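Note: the CopyFile steps above push testdata/cp-test.txt to each node and cross-copy it between nodes, verifying every hop with `sudo cat`. A condensed sketch of one hop's round-trip check, using the same `minikube cp` and `minikube ssh -n` subcommands shown in the log (the helper name is illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// verifyHop copies a local file to a node, reads it back over ssh,
	// and compares it with the local source.
	func verifyHop(profile, node, local string) error {
		remote := node + ":/home/docker/cp-test.txt"
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"cp", local, remote).CombinedOutput(); err != nil {
			return fmt.Errorf("cp failed: %v\n%s", err, out)
		}
		got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			return fmt.Errorf("ssh cat failed: %v", err)
		}
		want, err := os.ReadFile(local)
		if err != nil {
			return err
		}
		if !bytes.Equal(got, want) {
			return fmt.Errorf("content mismatch on %s", node)
		}
		return nil
	}

	func main() {
		if err := verifyHop("multinode-091885", "multinode-091885-m02", "testdata/cp-test.txt"); err != nil {
			fmt.Println(err)
		}
	}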
TestMultiNode/serial/StopNode (2.39s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-091885 node stop m03: (1.540370136s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-091885 status: exit status 7 (421.0298ms)
-- stdout --
	multinode-091885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-091885-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-091885-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr: exit status 7 (429.144485ms)
-- stdout --
	multinode-091885
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-091885-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-091885-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 20:58:01.649350  522952 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:58:01.649586  522952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:01.649594  522952 out.go:374] Setting ErrFile to fd 2...
	I1002 20:58:01.649599  522952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:01.649818  522952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 20:58:01.650050  522952 out.go:368] Setting JSON to false
	I1002 20:58:01.650094  522952 mustload.go:65] Loading cluster: multinode-091885
	I1002 20:58:01.650211  522952 notify.go:220] Checking for updates...
	I1002 20:58:01.650561  522952 config.go:182] Loaded profile config "multinode-091885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:58:01.650578  522952 status.go:174] checking status of multinode-091885 ...
	I1002 20:58:01.651089  522952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:58:01.651139  522952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:58:01.667270  522952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
	I1002 20:58:01.667770  522952 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:58:01.668554  522952 main.go:141] libmachine: Using API Version  1
	I1002 20:58:01.668591  522952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:58:01.668974  522952 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:58:01.669181  522952 main.go:141] libmachine: (multinode-091885) Calling .GetState
	I1002 20:58:01.670893  522952 status.go:371] multinode-091885 host status = "Running" (err=<nil>)
	I1002 20:58:01.670908  522952 host.go:66] Checking if "multinode-091885" exists ...
	I1002 20:58:01.671244  522952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:58:01.671303  522952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:58:01.685152  522952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I1002 20:58:01.685593  522952 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:58:01.686027  522952 main.go:141] libmachine: Using API Version  1
	I1002 20:58:01.686046  522952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:58:01.686334  522952 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:58:01.686534  522952 main.go:141] libmachine: (multinode-091885) Calling .GetIP
	I1002 20:58:01.689347  522952 main.go:141] libmachine: (multinode-091885) DBG | domain multinode-091885 has defined MAC address 52:54:00:5d:8c:9c in network mk-multinode-091885
	I1002 20:58:01.689820  522952 main.go:141] libmachine: (multinode-091885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8c:9c", ip: ""} in network mk-multinode-091885: {Iface:virbr1 ExpiryTime:2025-10-02 21:55:40 +0000 UTC Type:0 Mac:52:54:00:5d:8c:9c Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-091885 Clientid:01:52:54:00:5d:8c:9c}
	I1002 20:58:01.689858  522952 main.go:141] libmachine: (multinode-091885) DBG | domain multinode-091885 has defined IP address 192.168.39.201 and MAC address 52:54:00:5d:8c:9c in network mk-multinode-091885
	I1002 20:58:01.690019  522952 host.go:66] Checking if "multinode-091885" exists ...
	I1002 20:58:01.690320  522952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:58:01.690363  522952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:58:01.704672  522952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I1002 20:58:01.705213  522952 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:58:01.705694  522952 main.go:141] libmachine: Using API Version  1
	I1002 20:58:01.705743  522952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:58:01.706060  522952 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:58:01.706269  522952 main.go:141] libmachine: (multinode-091885) Calling .DriverName
	I1002 20:58:01.706498  522952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:58:01.706522  522952 main.go:141] libmachine: (multinode-091885) Calling .GetSSHHostname
	I1002 20:58:01.709972  522952 main.go:141] libmachine: (multinode-091885) DBG | domain multinode-091885 has defined MAC address 52:54:00:5d:8c:9c in network mk-multinode-091885
	I1002 20:58:01.710487  522952 main.go:141] libmachine: (multinode-091885) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:8c:9c", ip: ""} in network mk-multinode-091885: {Iface:virbr1 ExpiryTime:2025-10-02 21:55:40 +0000 UTC Type:0 Mac:52:54:00:5d:8c:9c Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:multinode-091885 Clientid:01:52:54:00:5d:8c:9c}
	I1002 20:58:01.710517  522952 main.go:141] libmachine: (multinode-091885) DBG | domain multinode-091885 has defined IP address 192.168.39.201 and MAC address 52:54:00:5d:8c:9c in network mk-multinode-091885
	I1002 20:58:01.710773  522952 main.go:141] libmachine: (multinode-091885) Calling .GetSSHPort
	I1002 20:58:01.710966  522952 main.go:141] libmachine: (multinode-091885) Calling .GetSSHKeyPath
	I1002 20:58:01.711129  522952 main.go:141] libmachine: (multinode-091885) Calling .GetSSHUsername
	I1002 20:58:01.711304  522952 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/multinode-091885/id_rsa Username:docker}
	I1002 20:58:01.790858  522952 ssh_runner.go:195] Run: systemctl --version
	I1002 20:58:01.797236  522952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:58:01.816765  522952 kubeconfig.go:125] found "multinode-091885" server: "https://192.168.39.201:8443"
	I1002 20:58:01.816805  522952 api_server.go:166] Checking apiserver status ...
	I1002 20:58:01.816840  522952 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:01.836270  522952 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup
	W1002 20:58:01.847462  522952 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:58:01.847517  522952 ssh_runner.go:195] Run: ls
	I1002 20:58:01.852242  522952 api_server.go:253] Checking apiserver healthz at https://192.168.39.201:8443/healthz ...
	I1002 20:58:01.856931  522952 api_server.go:279] https://192.168.39.201:8443/healthz returned 200:
	ok
	I1002 20:58:01.856951  522952 status.go:463] multinode-091885 apiserver status = Running (err=<nil>)
	I1002 20:58:01.856961  522952 status.go:176] multinode-091885 status: &{Name:multinode-091885 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:58:01.856980  522952 status.go:174] checking status of multinode-091885-m02 ...
	I1002 20:58:01.857271  522952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:58:01.857310  522952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:58:01.873207  522952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36693
	I1002 20:58:01.873725  522952 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:58:01.874231  522952 main.go:141] libmachine: Using API Version  1
	I1002 20:58:01.874258  522952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:58:01.874667  522952 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:58:01.874884  522952 main.go:141] libmachine: (multinode-091885-m02) Calling .GetState
	I1002 20:58:01.876514  522952 status.go:371] multinode-091885-m02 host status = "Running" (err=<nil>)
	I1002 20:58:01.876529  522952 host.go:66] Checking if "multinode-091885-m02" exists ...
	I1002 20:58:01.876811  522952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:58:01.876852  522952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:58:01.891115  522952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I1002 20:58:01.891533  522952 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:58:01.891985  522952 main.go:141] libmachine: Using API Version  1
	I1002 20:58:01.892005  522952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:58:01.892366  522952 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:58:01.892552  522952 main.go:141] libmachine: (multinode-091885-m02) Calling .GetIP
	I1002 20:58:01.895242  522952 main.go:141] libmachine: (multinode-091885-m02) DBG | domain multinode-091885-m02 has defined MAC address 52:54:00:43:87:c2 in network mk-multinode-091885
	I1002 20:58:01.895796  522952 main.go:141] libmachine: (multinode-091885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:87:c2", ip: ""} in network mk-multinode-091885: {Iface:virbr1 ExpiryTime:2025-10-02 21:56:32 +0000 UTC Type:0 Mac:52:54:00:43:87:c2 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-091885-m02 Clientid:01:52:54:00:43:87:c2}
	I1002 20:58:01.895822  522952 main.go:141] libmachine: (multinode-091885-m02) DBG | domain multinode-091885-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:43:87:c2 in network mk-multinode-091885
	I1002 20:58:01.895986  522952 host.go:66] Checking if "multinode-091885-m02" exists ...
	I1002 20:58:01.896321  522952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:58:01.896371  522952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:58:01.909402  522952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I1002 20:58:01.909846  522952 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:58:01.910259  522952 main.go:141] libmachine: Using API Version  1
	I1002 20:58:01.910278  522952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:58:01.910623  522952 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:58:01.910826  522952 main.go:141] libmachine: (multinode-091885-m02) Calling .DriverName
	I1002 20:58:01.911013  522952 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:58:01.911037  522952 main.go:141] libmachine: (multinode-091885-m02) Calling .GetSSHHostname
	I1002 20:58:01.913566  522952 main.go:141] libmachine: (multinode-091885-m02) DBG | domain multinode-091885-m02 has defined MAC address 52:54:00:43:87:c2 in network mk-multinode-091885
	I1002 20:58:01.913979  522952 main.go:141] libmachine: (multinode-091885-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:87:c2", ip: ""} in network mk-multinode-091885: {Iface:virbr1 ExpiryTime:2025-10-02 21:56:32 +0000 UTC Type:0 Mac:52:54:00:43:87:c2 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-091885-m02 Clientid:01:52:54:00:43:87:c2}
	I1002 20:58:01.913999  522952 main.go:141] libmachine: (multinode-091885-m02) DBG | domain multinode-091885-m02 has defined IP address 192.168.39.222 and MAC address 52:54:00:43:87:c2 in network mk-multinode-091885
	I1002 20:58:01.914144  522952 main.go:141] libmachine: (multinode-091885-m02) Calling .GetSSHPort
	I1002 20:58:01.914309  522952 main.go:141] libmachine: (multinode-091885-m02) Calling .GetSSHKeyPath
	I1002 20:58:01.914471  522952 main.go:141] libmachine: (multinode-091885-m02) Calling .GetSSHUsername
	I1002 20:58:01.914591  522952 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21682-492630/.minikube/machines/multinode-091885-m02/id_rsa Username:docker}
	I1002 20:58:01.995972  522952 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:58:02.011619  522952 status.go:176] multinode-091885-m02 status: &{Name:multinode-091885-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:58:02.011651  522952 status.go:174] checking status of multinode-091885-m03 ...
	I1002 20:58:02.011983  522952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:58:02.012028  522952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:58:02.025997  522952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I1002 20:58:02.026491  522952 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:58:02.026948  522952 main.go:141] libmachine: Using API Version  1
	I1002 20:58:02.026972  522952 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:58:02.027361  522952 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:58:02.027546  522952 main.go:141] libmachine: (multinode-091885-m03) Calling .GetState
	I1002 20:58:02.029350  522952 status.go:371] multinode-091885-m03 host status = "Stopped" (err=<nil>)
	I1002 20:58:02.029383  522952 status.go:384] host is not running, skipping remaining checks
	I1002 20:58:02.029390  522952 status.go:176] multinode-091885-m03 status: &{Name:multinode-091885-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
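For reference, the status check in the log above marks the apiserver Running only after GET https://<node-ip>:8443/healthz returns 200 with body "ok". A minimal Go sketch of that probe follows; the real code verifies against the cluster CA, whereas this sketch skips TLS verification purely for brevity (an assumption for illustration, not minikube's behavior):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification for the sketch;
		// minikube itself trusts the cluster CA it generated.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.201:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log above shows exactly this shape: "returned 200: ok".
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}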

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (37.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-091885 node start m03 -v=5 --alsologtostderr: (37.19211417s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.82s)
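The step above is a plain CLI invocation driven from the Go harness. A minimal sketch of the same call via os/exec, assuming the binary path shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same invocation the test makes, minus the harness wrapping.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-091885",
		"node", "start", "m03", "-v=5", "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("node start failed:", err)
	}
}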

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (336.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-091885
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-091885
E1002 21:00:06.129168  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:00:19.910039  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-091885: (2m53.18884873s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-091885 --wait=true -v=5 --alsologtostderr
E1002 21:02:16.831311  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-091885 --wait=true -v=5 --alsologtostderr: (2m43.199493961s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-091885
--- PASS: TestMultiNode/serial/RestartKeepsNodes (336.49s)
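The invariant this test checks is that a full stop/start cycle preserves the node list. A hedged Go sketch of that check, assuming the same profile name and binary path as the log:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func nodeList() []byte {
	out, _ := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", "multinode-091885").Output()
	return out
}

func main() {
	before := nodeList()
	_ = exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-091885").Run()
	_ = exec.Command("out/minikube-linux-amd64", "start", "-p", "multinode-091885", "--wait=true").Run()
	if after := nodeList(); !bytes.Equal(before, after) {
		fmt.Println("node list changed across restart")
	} else {
		fmt.Println("restart kept all nodes")
	}
}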

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-091885 node delete m03: (2.259009831s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.78s)
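The kubectl invocation above uses a Go text/template to pull each node's Ready condition. The same template string can be exercised standalone against a hand-built stand-in for `kubectl get nodes -o json` output:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Template string copied from the test; it prints the status of every
	// "Ready" condition, one per line.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	// Hand-built stand-in for the node list kubectl would return.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	if err := template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}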

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (167.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 stop
E1002 21:05:06.131435  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-091885 stop: (2m47.294575895s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-091885 status: exit status 7 (95.237095ms)

                                                
                                                
-- stdout --
	multinode-091885
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-091885-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr: exit status 7 (83.392515ms)

                                                
                                                
-- stdout --
	multinode-091885
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-091885-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:07:06.555851  525858 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:06.556150  525858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.556161  525858 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:06.556167  525858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.556371  525858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 21:07:06.556555  525858 out.go:368] Setting JSON to false
	I1002 21:07:06.556592  525858 mustload.go:65] Loading cluster: multinode-091885
	I1002 21:07:06.556690  525858 notify.go:220] Checking for updates...
	I1002 21:07:06.557023  525858 config.go:182] Loaded profile config "multinode-091885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:06.557044  525858 status.go:174] checking status of multinode-091885 ...
	I1002 21:07:06.557569  525858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:07:06.557615  525858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:07:06.571010  525858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I1002 21:07:06.571421  525858 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:07:06.571917  525858 main.go:141] libmachine: Using API Version  1
	I1002 21:07:06.571985  525858 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:07:06.572429  525858 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:07:06.572646  525858 main.go:141] libmachine: (multinode-091885) Calling .GetState
	I1002 21:07:06.574600  525858 status.go:371] multinode-091885 host status = "Stopped" (err=<nil>)
	I1002 21:07:06.574622  525858 status.go:384] host is not running, skipping remaining checks
	I1002 21:07:06.574629  525858 status.go:176] multinode-091885 status: &{Name:multinode-091885 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:07:06.574674  525858 status.go:174] checking status of multinode-091885-m02 ...
	I1002 21:07:06.575018  525858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 21:07:06.575056  525858 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 21:07:06.588081  525858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I1002 21:07:06.588480  525858 main.go:141] libmachine: () Calling .GetVersion
	I1002 21:07:06.588890  525858 main.go:141] libmachine: Using API Version  1
	I1002 21:07:06.588910  525858 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 21:07:06.589215  525858 main.go:141] libmachine: () Calling .GetMachineName
	I1002 21:07:06.589399  525858 main.go:141] libmachine: (multinode-091885-m02) Calling .GetState
	I1002 21:07:06.590919  525858 status.go:371] multinode-091885-m02 host status = "Stopped" (err=<nil>)
	I1002 21:07:06.590936  525858 status.go:384] host is not running, skipping remaining checks
	I1002 21:07:06.590943  525858 status.go:176] multinode-091885-m02 status: &{Name:multinode-091885-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (167.47s)
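Note the exit status: `minikube status` returns 7 when the host is stopped, which the harness treats as expected rather than a failure ("may be ok" elsewhere in this report). A minimal sketch of inspecting that exit code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-091885", "status").Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// A fully stopped profile reports exit status 7; tests accept it
		// after an explicit `minikube stop`.
		fmt.Println("all hosts stopped (exit status 7)")
	}
}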

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (85.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-091885 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 21:07:16.831220  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:08:09.195870  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-091885 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.722724205s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-091885 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.26s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (38.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-091885
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-091885-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-091885-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (69.351489ms)

                                                
                                                
-- stdout --
	* [multinode-091885-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-091885-m02' is duplicated with machine name 'multinode-091885-m02' in profile 'multinode-091885'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-091885-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-091885-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.722146026s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-091885
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-091885: exit status 80 (221.044514ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-091885 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-091885-m03 already exists in multinode-091885-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-091885-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.90s)
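The two refusals above come from minikube's name-uniqueness rules: a new profile may not reuse a machine name owned by an existing profile (exit status 14, MK_USAGE), and `node add` rejects a node name that already exists (exit status 80, GUEST_NODE_ADD). A toy sketch of the first rule, with the machine-to-profile mapping taken from the log (the real check lives in minikube's profile code; this is illustrative only):

package main

import "fmt"

func main() {
	// machine name -> owning profile, per the log above
	existingMachines := map[string]string{
		"multinode-091885":     "multinode-091885",
		"multinode-091885-m02": "multinode-091885",
	}
	newProfile := "multinode-091885-m02"
	if owner, dup := existingMachines[newProfile]; dup {
		fmt.Printf("! Profile name '%s' is duplicated with machine name '%s' in profile '%s'\n",
			newProfile, newProfile, owner)
		fmt.Println("X Exiting due to MK_USAGE: Profile name should be unique")
	}
}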

                                                
                                    
x
+
TestScheduledStopUnix (108.5s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-398476 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 21:12:16.831841  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-398476 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.809931949s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398476 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-398476 -n scheduled-stop-398476
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398476 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 21:12:29.254691  497569 retry.go:31] will retry after 59.616µs: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.255897  497569 retry.go:31] will retry after 183.244µs: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.257025  497569 retry.go:31] will retry after 157.222µs: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.258180  497569 retry.go:31] will retry after 483.81µs: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.259329  497569 retry.go:31] will retry after 639.083µs: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.260476  497569 retry.go:31] will retry after 1.076993ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.262661  497569 retry.go:31] will retry after 1.250824ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.264850  497569 retry.go:31] will retry after 1.148196ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.267049  497569 retry.go:31] will retry after 3.401689ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.271247  497569 retry.go:31] will retry after 2.197453ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.274426  497569 retry.go:31] will retry after 3.590954ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.278637  497569 retry.go:31] will retry after 10.407774ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.289831  497569 retry.go:31] will retry after 15.021913ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.305058  497569 retry.go:31] will retry after 24.197394ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
I1002 21:12:29.329488  497569 retry.go:31] will retry after 41.124983ms: open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398476 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-398476 -n scheduled-stop-398476
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-398476
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-398476 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-398476
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-398476: exit status 7 (67.926696ms)

                                                
                                                
-- stdout --
	scheduled-stop-398476
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-398476 -n scheduled-stop-398476
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-398476 -n scheduled-stop-398476: exit status 7 (65.731712ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-398476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-398476
--- PASS: TestScheduledStopUnix (108.50s)
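The burst of retry.go lines above is the harness polling for the scheduled-stop pid file with roughly doubling delays. A minimal sketch of that backoff loop, using the pid-file path from the log (adjust MINIKUBE_HOME for another machine):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const pid = "/home/jenkins/minikube-integration/21682-492630/.minikube/profiles/scheduled-stop-398476/pid"
	delay := 60 * time.Microsecond
	for i := 0; i < 15; i++ {
		if _, err := os.Stat(pid); err == nil {
			fmt.Println("scheduled-stop pid file present")
			return
		}
		time.Sleep(delay)
		delay *= 2 // roughly the doubling intervals visible in the retry lines
	}
	fmt.Println("gave up waiting for", pid)
}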

                                                
                                    
x
+
TestRunningBinaryUpgrade (151.25s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3999371162 start -p running-upgrade-702829 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3999371162 start -p running-upgrade-702829 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.835650525s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-702829 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-702829 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.587079025s)
helpers_test.go:175: Cleaning up "running-upgrade-702829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-702829
--- PASS: TestRunningBinaryUpgrade (151.25s)
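The upgrade pattern here is two `start` calls against the same profile: first with an old release binary, then with the current build, which upgrades the running cluster in place. A hedged sketch using the binary paths and flags from the log (the old release invocation uses --vm-driver, the current build --driver):

package main

import (
	"os"
	"os/exec"
)

func start(bin string, extra ...string) error {
	args := append([]string{"start", "-p", "running-upgrade-702829",
		"--memory=3072", "--container-runtime=crio"}, extra...)
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Boot with the old release, then re-run start with the current build;
	// the second call reuses and upgrades the existing profile.
	_ = start("/tmp/minikube-v1.32.0.3999371162", "--vm-driver=kvm2")
	_ = start("out/minikube-linux-amd64", "--driver=kvm2")
}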

                                                
                                    
x
+
TestKubernetesUpgrade (133.58s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.542160111s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-238376
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-238376: (1.922191885s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-238376 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-238376 status --format={{.Host}}: exit status 7 (75.731996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 21:17:16.831818  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m4.441351842s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-238376 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (90.85132ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-238376] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-238376
	    minikube start -p kubernetes-upgrade-238376 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2383762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-238376 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-238376 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.643279319s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-238376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-238376
--- PASS: TestKubernetesUpgrade (133.58s)
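The downgrade refusal above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) boils down to a version comparison: a requested Kubernetes version lower than the running one is rejected with recreate/second-cluster suggestions instead. A sketch of that gate using golang.org/x/mod/semver as a stand-in for minikube's own version handling (an assumption, not the actual implementation):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	running, requested := "v1.34.1", "v1.28.0"
	if semver.Compare(requested, running) < 0 {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			running, requested)
		return // minikube exits 106 (K8S_DOWNGRADE_UNSUPPORTED) at this point
	}
	fmt.Println("version change accepted")
}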

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685644 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-685644 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (72.812715ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-685644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
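This test only exercises flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and the conflict exits with status 14 (MK_USAGE) before any VM work happens. A toy sketch of such a check with the standard flag package (minikube itself uses cobra, so this is illustrative only):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE, matching the exit status seen above
	}
	fmt.Println("flags ok")
}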

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (81.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685644 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685644 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.81030486s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-685644 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (81.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (49.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 21:15:06.129275  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.15835687s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-685644 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-685644 status -o json: exit status 2 (239.648551ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-685644","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-685644
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.24s)
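The `status -o json` output above is a single JSON object per node. Decoding it in Go takes one struct whose fields match the keys minikube emits:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the keys in the JSON line shown above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-685644","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st Status
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}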

                                                
                                    
x
+
TestNoKubernetes/serial/Start (44.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685644 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.749088604s)
--- PASS: TestNoKubernetes/serial/Start (44.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-959487 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-959487 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (134.51812ms)

                                                
                                                
-- stdout --
	* [false-959487] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:16:15.379581  532599 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:16:15.379947  532599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:16:15.379963  532599 out.go:374] Setting ErrFile to fd 2...
	I1002 21:16:15.379970  532599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:16:15.380275  532599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-492630/.minikube/bin
	I1002 21:16:15.380975  532599 out.go:368] Setting JSON to false
	I1002 21:16:15.382201  532599 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7110,"bootTime":1759432665,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:16:15.382319  532599 start.go:140] virtualization: kvm guest
	I1002 21:16:15.384409  532599 out.go:179] * [false-959487] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:16:15.385732  532599 notify.go:220] Checking for updates...
	I1002 21:16:15.385833  532599 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:16:15.386867  532599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:16:15.387869  532599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-492630/kubeconfig
	I1002 21:16:15.388873  532599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-492630/.minikube
	I1002 21:16:15.389900  532599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:16:15.393732  532599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:16:15.395229  532599 config.go:182] Loaded profile config "NoKubernetes-685644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1002 21:16:15.395392  532599 config.go:182] Loaded profile config "cert-expiration-852898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:16:15.395536  532599 config.go:182] Loaded profile config "cert-options-664739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:16:15.395696  532599 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:16:15.439129  532599 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 21:16:15.441430  532599 start.go:304] selected driver: kvm2
	I1002 21:16:15.441461  532599 start.go:924] validating driver "kvm2" against <nil>
	I1002 21:16:15.441478  532599 start.go:935] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:16:15.443541  532599 out.go:203] 
	W1002 21:16:15.444549  532599 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 21:16:15.445399  532599 out.go:203] 

                                                
                                                
** /stderr **
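The MK_USAGE failure above is the expected outcome: the crio runtime cannot run without a CNI plugin, so --cni=false is rejected before a VM is created (exit status 14). A toy sketch of that validation (the real check is in minikube's start-time validation; this is illustrative only):

package main

import (
	"fmt"
	"os"
)

func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}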
net_test.go:88: 
----------------------- debugLogs start: false-959487 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-959487" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:15:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.109:8443
  name: cert-expiration-852898
contexts:
- context:
    cluster: cert-expiration-852898
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:15:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-852898
  name: cert-expiration-852898
current-context: ""
kind: Config
users:
- name: cert-expiration-852898
  user:
    client-certificate: /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/cert-expiration-852898/client.crt
    client-key: /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/cert-expiration-852898/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-959487

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-959487"

                                                
                                                
----------------------- debugLogs end: false-959487 [took: 3.338245776s] --------------------------------
helpers_test.go:175: Cleaning up "false-959487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-959487
--- PASS: TestNetworkPlugins/group/false (3.62s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-685644 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-685644 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.750729ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
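
Note on the assertion above: `systemctl is-active --quiet` exits 0 only when the unit is active, so the harness treats any non-zero exit (the stderr shows status 4, which typically means the unit is not even loaded) as proof that the kubelet is not running. A minimal Go sketch of the same check, outside the minikube harness:

// Assumption: plain stand-in for the ssh'd systemctl probe above, run locally.
package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active: exit 0 means
// active; any non-zero exit means inactive, failed, or not loaded.
func serviceActive(unit string) (bool, error) {
	err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
	if err == nil {
		return true, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, nil
	}
	return false, err // systemctl itself could not be started
}

func main() {
	active, err := serviceActive("kubelet")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("kubelet active:", active) // the test passes when this is false
}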

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
I1002 21:16:36.033875  497569 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 21:16:36.034042  497569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1172047300/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 21:16:36.071541  497569 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1172047300/001/docker-machine-driver-kvm2 version is 1.1.1
W1002 21:16:36.071588  497569 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1002 21:16:36.071767  497569 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1002 21:16:36.071822  497569 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1172047300/001/docker-machine-driver-kvm2
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
I1002 21:16:36.746950  497569 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1172047300/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 21:16:36.764018  497569 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1172047300/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)
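
The interleaved install.go/download.go lines above come from minikube's driver self-update path running alongside this test: it probes the cached docker-machine-driver-kvm2 binary for its version and, on a mismatch with the wanted release, re-downloads it against a published .sha256 checksum. A rough Go sketch of that version gate; the `version` subcommand and the path here are illustrative assumptions, not minikube's actual API:

// Assumption: hypothetical stand-in for minikube's driver version gate.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const want = "1.37.0" // release the CLI expects, per the log above

func main() {
	driver := "/tmp/docker-machine-driver-kvm2" // hypothetical cache path
	out, err := exec.Command(driver, "version").Output()
	got := strings.TrimSpace(string(out))
	if err != nil || !strings.Contains(got, want) {
		fmt.Printf("driver reports %q, want %s: re-downloading\n", got, want)
		// a real implementation would fetch the release binary here and
		// verify it against the published .sha256 before replacing it
		return
	}
	fmt.Println("driver up to date")
}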

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.01s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.01s)

TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-685644
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-685644: (1.370611109s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

TestNoKubernetes/serial/StartNoArgs (31.91s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-685644 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-685644 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (31.906872508s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (31.91s)

TestStoppedBinaryUpgrade/Upgrade (122.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.794621336 start -p stopped-upgrade-391687 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 21:16:59.911969  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.794621336 start -p stopped-upgrade-391687 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.465043053s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.794621336 -p stopped-upgrade-391687 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.794621336 -p stopped-upgrade-391687 stop: (1.656190832s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-391687 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-391687 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.869065453s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.99s)
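
The upgrade scenario above is a three-step flow: create a cluster with the archived v1.32.0 binary, stop it, then start the same profile with the binary under test so it has to adopt the on-disk state. A condensed Go sketch of that sequence; the binary paths stand in for the temp files the test actually uses:

// Assumption: simplified driver of the stop-then-upgrade sequence above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	oldBin, newBin := "/tmp/minikube-v1.32.0", "out/minikube-linux-amd64"
	profile := "stopped-upgrade-391687"
	run(oldBin, "start", "-p", profile, "--memory=3072") // cluster created by the old release
	run(oldBin, "stop", "-p", profile)                   // state left on disk
	run(newBin, "start", "-p", profile, "--memory=3072") // new binary must upgrade it in place
}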

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-685644 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-685644 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.791362ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestPause/serial/Start (95.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-128856 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-128856 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m35.041693421s)
--- PASS: TestPause/serial/Start (95.04s)

TestStartStop/group/old-k8s-version/serial/FirstStart (55.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-166937 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-166937 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (55.0997164s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (55.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-391687
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-391687: (1.057097112s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

TestStartStop/group/no-preload/serial/FirstStart (113.65s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-397715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-397715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m53.64498849s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.65s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-166937 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bcdd3f59-f70b-48c9-9827-d0ce12438d71] Pending
helpers_test.go:352: "busybox" [bcdd3f59-f70b-48c9-9827-d0ce12438d71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bcdd3f59-f70b-48c9-9827-d0ce12438d71] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004749439s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-166937 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.36s)
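
DeployApp's last step reads the open-file limit inside the busybox pod via `kubectl exec`, confirming the container runtime applied a usable ulimit. A small Go sketch of that probe, reusing the context and pod names from the log:

// Assumption: stand-alone version of the ulimit probe above.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-166937",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	limit, convErr := strconv.Atoi(strings.TrimSpace(string(out)))
	if convErr != nil {
		fmt.Println("unexpected output:", strings.TrimSpace(string(out)))
		return
	}
	fmt.Println("open-file limit inside the pod:", limit)
}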

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-166937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-166937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048620267s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-166937 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/old-k8s-version/serial/Stop (82.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-166937 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-166937 --alsologtostderr -v=3: (1m22.560998313s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (82.56s)

TestStartStop/group/embed-certs/serial/FirstStart (80.68s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-296193 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-296193 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m20.68353548s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.68s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-088653 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1002 21:20:06.129010  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-088653 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m33.976904755s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.98s)

TestStartStop/group/no-preload/serial/DeployApp (11.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-397715 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e0e77e6-aca1-44a5-a7aa-7111ad34a62d] Pending
helpers_test.go:352: "busybox" [3e0e77e6-aca1-44a5-a7aa-7111ad34a62d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e0e77e6-aca1-44a5-a7aa-7111ad34a62d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004443931s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-397715 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-397715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-397715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.130622408s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-397715 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/no-preload/serial/Stop (85.62s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-397715 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-397715 --alsologtostderr -v=3: (1m25.6218407s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166937 -n old-k8s-version-166937
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166937 -n old-k8s-version-166937: exit status 7 (71.162745ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-166937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
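
The "status error: exit status 7 (may be ok)" lines show the harness tolerating a non-zero `minikube status` exit as long as stdout reports the host as Stopped, since enabling an addon on a stopped profile only rewrites its configuration. A Go sketch of that tolerant check, with the profile name taken from the log:

// Assumption: simplified version of the tolerant status check above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-166937").Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode() // Output still returns the captured stdout here
	}
	host := strings.TrimSpace(string(out))
	fmt.Printf("host=%s exit=%d\n", host, code)
	if code != 0 && host != "Stopped" {
		fmt.Println("unexpected status; not safe to continue")
		return
	}
	// safe to run `addons enable dashboard` against the stopped profile
}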

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-166937 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-166937 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (45.835732734s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166937 -n old-k8s-version-166937
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.13s)

TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-296193 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [61f37bcf-cca5-4664-b8f0-cdd7888ed24a] Pending
helpers_test.go:352: "busybox" [61f37bcf-cca5-4664-b8f0-cdd7888ed24a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [61f37bcf-cca5-4664-b8f0-cdd7888ed24a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00346586s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-296193 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-296193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-296193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.477579446s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-296193 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)

TestStartStop/group/embed-certs/serial/Stop (83.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-296193 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-296193 --alsologtostderr -v=3: (1m23.965432079s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.97s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-088653 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a54b0e43-e42a-46ca-9fa3-5d54bc6a3af3] Pending
helpers_test.go:352: "busybox" [a54b0e43-e42a-46ca-9fa3-5d54bc6a3af3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a54b0e43-e42a-46ca-9fa3-5d54bc6a3af3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003716279s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-088653 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-088653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-088653 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (87.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-088653 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-088653 --alsologtostderr -v=3: (1m27.882598546s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7g7mv" [d3e28052-0a36-4bf2-9301-00b411376e76] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7g7mv" [d3e28052-0a36-4bf2-9301-00b411376e76] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004950974s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)
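
The wait above is a label-selector poll: the helper repeatedly lists pods matching k8s-app=kubernetes-dashboard and succeeds once every match reports Running, up to the 9m0s deadline. A simplified Go equivalent built on kubectl's jsonpath output:

// Assumption: minimal stand-in for the helpers_test.go pod wait above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allRunning is true once at least one pod matches the selector and
// every matching pod reports phase Running.
func allRunning(ctx, ns, selector string) bool {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
		"get", "pods", "-l", selector,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false
	}
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		if allRunning("old-k8s-version-166937", "kubernetes-dashboard",
			"k8s-app=kubernetes-dashboard") {
			fmt.Println("dashboard healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for dashboard")
}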

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7g7mv" [d3e28052-0a36-4bf2-9301-00b411376e76] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00360248s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-166937 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-166937 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-166937 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166937 -n old-k8s-version-166937
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166937 -n old-k8s-version-166937: exit status 2 (242.557687ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-166937 -n old-k8s-version-166937
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-166937 -n old-k8s-version-166937: exit status 2 (251.577304ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-166937 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166937 -n old-k8s-version-166937
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-166937 -n old-k8s-version-166937
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)
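
The Pause subtest drives a full round trip: after `minikube pause`, the status queries exit with code 2 while printing APIServer=Paused and Kubelet=Stopped (which the harness accepts), and after `minikube unpause` the same queries succeed again. A minimal Go sketch of the round trip, with the profile name from the log and error handling elided:

// Assumption: condensed pause/unpause round trip from the test above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status ignores the exit code on purpose: exit 2 is expected while the
// cluster is paused, and stdout still carries the component state.
func status(format, profile string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	p := "old-k8s-version-166937"
	exec.Command("out/minikube-linux-amd64", "pause", "-p", p).Run()
	fmt.Println(status("{{.APIServer}}", p), status("{{.Kubelet}}", p)) // want: Paused Stopped
	exec.Command("out/minikube-linux-amd64", "unpause", "-p", p).Run()
	fmt.Println(status("{{.APIServer}}", p), status("{{.Kubelet}}", p)) // both report a running state again
}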

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-638437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1002 21:22:16.831086  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/addons-760875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-638437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (42.124554155s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397715 -n no-preload-397715
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397715 -n no-preload-397715: exit status 7 (70.167851ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-397715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (67.98s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-397715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-397715 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m7.596843208s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397715 -n no-preload-397715
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (67.98s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-638437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-638437 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035339267s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/newest-cni/serial/Stop (13.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-638437 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-638437 --alsologtostderr -v=3: (13.056109339s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-296193 -n embed-certs-296193
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-296193 -n embed-certs-296193: exit status 7 (65.334268ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-296193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (47.97s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-296193 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-296193 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (47.557858057s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-296193 -n embed-certs-296193
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.97s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-638437 -n newest-cni-638437
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-638437 -n newest-cni-638437: exit status 7 (93.06112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-638437 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (42.09s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-638437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-638437 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (41.756328174s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-638437 -n newest-cni-638437
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653: exit status 7 (97.920511ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-088653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-088653 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-088653 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m6.979661751s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (67.32s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2h2f9" [eb15f0a4-e2b9-4125-a63d-324b4a347a6c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2h2f9" [eb15f0a4-e2b9-4125-a63d-324b4a347a6c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.00449537s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2h2f9" [eb15f0a4-e2b9-4125-a63d-324b4a347a6c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004911592s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-397715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-397715 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-397715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-397715 --alsologtostderr -v=1: (1.054341729s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397715 -n no-preload-397715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397715 -n no-preload-397715: exit status 2 (284.281203ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-397715 -n no-preload-397715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-397715 -n no-preload-397715: exit status 2 (307.368863ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-397715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397715 -n no-preload-397715
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-397715 -n no-preload-397715
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.30s)
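The Pause subtest above follows a fixed cycle: pause the cluster, confirm via status that the API server reports Paused and the kubelet reports Stopped (exit status 2 is expected while paused, hence the "may be ok" notes), then unpause and re-run both status checks. Condensed into the same commands the log shows:

  out/minikube-linux-amd64 pause -p no-preload-397715 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397715 -n no-preload-397715   # Paused, exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-397715 -n no-preload-397715     # Stopped, exit 2
  out/minikube-linux-amd64 unpause -p no-preload-397715 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397715 -n no-preload-397715   # exit 0 once unpaused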

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4jcgj" [98e59ac9-9d37-4342-9667-a632d6465694] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4jcgj" [98e59ac9-9d37-4342-9667-a632d6465694] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.010791093s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.243494135s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-638437 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.05s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-638437 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-638437 -n newest-cni-638437
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-638437 -n newest-cni-638437: exit status 2 (276.315743ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-638437 -n newest-cni-638437
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-638437 -n newest-cni-638437: exit status 2 (266.251718ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-638437 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-638437 -n newest-cni-638437
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-638437 -n newest-cni-638437
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4jcgj" [98e59ac9-9d37-4342-9667-a632d6465694] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004616155s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-296193 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (76.73s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.727396818s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-296193 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-296193 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-296193 --alsologtostderr -v=1: (1.465418075s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-296193 -n embed-certs-296193
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-296193 -n embed-certs-296193: exit status 2 (261.727785ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-296193 -n embed-certs-296193
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-296193 -n embed-certs-296193: exit status 2 (326.196232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-296193 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-296193 --alsologtostderr -v=1: (1.037174703s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-296193 -n embed-certs-296193
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-296193 -n embed-certs-296193
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.78s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (98.26s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.262169737s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7hpzq" [a7a9ba58-828b-4cfa-b994-55bc8796c1fe] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7hpzq" [a7a9ba58-828b-4cfa-b994-55bc8796c1fe] Running
E1002 21:24:33.033201  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:33.039637  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:33.051113  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:33.072656  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:33.114215  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:33.196586  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:33.359593  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:33.681696  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:34.323960  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00414235s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7hpzq" [a7a9ba58-828b-4cfa-b994-55bc8796c1fe] Running
E1002 21:24:35.606096  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:38.167787  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.052610494s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-088653 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-088653 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-088653 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653: exit status 2 (274.059783ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653: exit status 2 (301.713222ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-088653 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-088653 -n default-k8s-diff-port-088653
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.08s)
E1002 21:26:42.015419  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/default-k8s-diff-port-088653/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:26:47.136878  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/default-k8s-diff-port-088653/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:26:57.379228  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/default-k8s-diff-port-088653/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:27:00.868723  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/no-preload-397715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (85.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 21:24:49.197144  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:53.535582  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:25:06.129260  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/functional-175435/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.475407855s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.48s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-959487 "pgrep -a kubelet"
I1002 21:25:11.676213  497569 config.go:182] Loaded profile config "auto-959487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)
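KubeletFlags only needs the kubelet's full command line, so the test shells into the node and greps the process table; the assertions on specific flags happen on the Go side. The probe, verbatim from the log:

  # Show the kubelet process with its complete argument list.
  out/minikube-linux-amd64 ssh -p auto-959487 "pgrep -a kubelet"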

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-959487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4wb9s" [ca32228f-7919-4efa-94c9-097ea787054e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4wb9s" [ca32228f-7919-4efa-94c9-097ea787054e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005000018s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)
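NetCatPod installs a small netcat/dnsutils deployment that the later DNS, Localhost, and HairPin subtests exec into. Reproduced from the log, with kubectl wait as a stand-in for the harness's pod poller:

  # (Re)create the netcat deployment from the repo's testdata.
  kubectl --context auto-959487 replace --force -f testdata/netcat-deployment.yaml

  # Wait until the pod is Ready (the test allows up to 15m).
  kubectl --context auto-959487 wait --for=condition=ready pod -l app=netcat --timeout=15m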

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lh97q" [a6d51e03-cf59-4dc8-b4a9-28794124f47c] Running
E1002 21:25:14.017818  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004894343s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
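ControllerPod just confirms that the CNI's agent pod (here kindnet, labeled app=kindnet in kube-system) reaches Running/Ready before the connectivity tests proceed. A manual equivalent, again substituting kubectl wait for the harness's poller:

  kubectl --context kindnet-959487 -n kube-system \
    wait --for=condition=ready pod -l app=kindnet --timeout=10m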

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-959487 "pgrep -a kubelet"
I1002 21:25:19.351773  497569 config.go:182] Loaded profile config "kindnet-959487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.47s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-959487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kfdfm" [138d3818-fdc1-4abd-ad74-68bbcc922909] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kfdfm" [138d3818-fdc1-4abd-ad74-68bbcc922909] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005221081s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.47s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-959487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
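The three one-liners above probe different paths from inside the netcat pod: DNS resolves the in-cluster API service, Localhost connects to the pod's own port 8080, and HairPin connects to the pod back through its own service name, exercising hairpin NAT. The exact commands from the log:

  kubectl --context auto-959487 exec deployment/netcat -- nslookup kubernetes.default                    # DNS
  kubectl --context auto-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # Localhost
  kubectl --context auto-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # HairPin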

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-959487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 21:25:41.498560  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/no-preload-397715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:25:44.060486  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/no-preload-397715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.450516277s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.45s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-qrl6j" [bc7c4c7e-1b08-4f6e-9383-d64b693f2d75] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-qrl6j" [bc7c4c7e-1b08-4f6e-9383-d64b693f2d75] Running
E1002 21:25:49.182897  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/no-preload-397715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007843474s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (85.71s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.707156032s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.71s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-959487 "pgrep -a kubelet"
I1002 21:25:51.336600  497569 config.go:182] Loaded profile config "calico-959487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-959487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nc7vj" [f0786666-8b90-4a9e-b14d-c96a45f1194f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 21:25:54.979866  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/old-k8s-version-166937/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nc7vj" [f0786666-8b90-4a9e-b14d-c96a45f1194f] Running
E1002 21:25:59.424964  497569 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/no-preload-397715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005214195s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-959487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-959487 "pgrep -a kubelet"
I1002 21:26:10.134919  497569 config.go:182] Loaded profile config "custom-flannel-959487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.51s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-959487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qqkd8" [63f6eba0-1140-450e-8187-8e80831ab489] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qqkd8" [63f6eba0-1140-450e-8187-8e80831ab489] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005335125s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.52s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (82.49s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-959487 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.486268125s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.49s)
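Every network-plugin Start test in this run uses the same invocation and varies only the CNI selector. Generalized from the runs above; $PROFILE and $CNI_FLAG are illustrative placeholders, not flags from the log:

  # CNI_FLAG is empty for auto, or one of: --cni=kindnet, --cni=calico,
  # --cni=flannel, --cni=bridge, --cni=testdata/kube-flannel.yaml,
  # --enable-default-cni=true
  out/minikube-linux-amd64 start -p "$PROFILE" --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m $CNI_FLAG \
    --driver=kvm2 --container-runtime=crio --auto-update-drivers=false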

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-959487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-959487 "pgrep -a kubelet"
I1002 21:27:05.001744  497569 config.go:182] Loaded profile config "enable-default-cni-959487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-959487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-whhzn" [6b7714bc-a451-4b56-acc0-d9cf04b92360] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-whhzn" [6b7714bc-a451-4b56-acc0-d9cf04b92360] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004541352s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-t8tzp" [7b858f35-710d-403a-994b-b3cf798aa8fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003825833s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-959487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-959487 "pgrep -a kubelet"
I1002 21:27:19.240847  497569 config.go:182] Loaded profile config "flannel-959487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-959487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bvrz6" [37fce013-8b7d-41fb-9f2a-2acc3f5f8b71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bvrz6" [37fce013-8b7d-41fb-9f2a-2acc3f5f8b71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004981368s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-959487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-959487 "pgrep -a kubelet"
I1002 21:27:43.324115  497569 config.go:182] Loaded profile config "bridge-959487": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-959487 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2lnwm" [21516022-e0c1-469d-892a-8aeb74753006] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2lnwm" [21516022-e0c1-469d-892a-8aeb74753006] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003757068s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-959487 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-959487 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
140 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
264 TestStartStop/group/disable-driver-mounts 0.17
268 TestNetworkPlugins/group/kubenet 3.48
276 TestNetworkPlugins/group/cilium 5.22

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-760875 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
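Every TunnelCmd subtest above skipped for the same reason: manipulating routes requires root, and this agent has no passwordless sudo. A sketch of that kind of privilege probe follows; the helper is hypothetical, but it relies on the real behavior of sudo -n, which fails instead of prompting when a password would be required.

package tunnel_test

import (
	"os/exec"
	"testing"
)

// skipIfRouteNeedsPassword probes whether 'route' can run without a
// password prompt; if not, the tunnel subtests have no way to set up
// routes on this agent and bail out. Hypothetical helper.
func skipIfRouteNeedsPassword(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}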
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-109435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-109435
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
TestNetworkPlugins/group/kubenet (3.48s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-959487 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-959487

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-959487

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /etc/hosts:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /etc/resolv.conf:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-959487

>>> host: crictl pods:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: crictl containers:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> k8s: describe netcat deployment:
error: context "kubenet-959487" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-959487" does not exist

>>> k8s: netcat logs:
error: context "kubenet-959487" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-959487" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-959487" does not exist

>>> k8s: coredns logs:
error: context "kubenet-959487" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-959487" does not exist

>>> k8s: api server logs:
error: context "kubenet-959487" does not exist

>>> host: /etc/cni:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: ip a s:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: ip r s:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: iptables-save:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: iptables table nat:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-959487" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-959487" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-959487" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: kubelet daemon config:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> k8s: kubelet logs:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:15:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.109:8443
  name: cert-expiration-852898
contexts:
- context:
    cluster: cert-expiration-852898
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:15:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-852898
  name: cert-expiration-852898
current-context: ""
kind: Config
users:
- name: cert-expiration-852898
  user:
    client-certificate: /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/cert-expiration-852898/client.crt
    client-key: /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/cert-expiration-852898/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-959487

>>> host: docker daemon status:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: docker daemon config:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: docker system info:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: cri-docker daemon status:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: cri-docker daemon config:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: cri-dockerd version:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: containerd daemon status:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: containerd daemon config:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: containerd config dump:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: crio daemon status:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: crio daemon config:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: /etc/crio:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

>>> host: crio config:
* Profile "kubenet-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-959487"

----------------------- debugLogs end: kubenet-959487 [took: 3.276570773s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-959487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-959487
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)
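The debugLogs block above is collected best-effort against a profile that was never started (the test skips before ever running minikube start), which is why every probe reports a missing context or profile. A sketch of how such a collector can work follows; the probe list and helper are illustrative assumptions, not net_test.go's actual collector.

package net_test

import (
	"os/exec"
	"strings"
	"testing"
)

// debugLogs runs each diagnostic command and logs its combined output even
// when the command fails; against a never-started profile, every probe
// therefore prints a "context was not found" or "Profile ... not found"
// message rather than real diagnostics.
func debugLogs(t *testing.T, profile string) {
	probes := [][]string{
		{"kubectl", "--context", profile, "get", "nodes"},
		{"minikube", "-p", profile, "ssh", "cat /etc/resolv.conf"},
	}
	for _, p := range probes {
		out, _ := exec.Command(p[0], p[1:]...).CombinedOutput() // best-effort: ignore the error
		t.Logf(">>> %s:\n%s", strings.Join(p, " "), out)
	}
}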
TestNetworkPlugins/group/cilium (5.22s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-959487 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-959487

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-959487

>>> host: /etc/nsswitch.conf:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /etc/hosts:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /etc/resolv.conf:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-959487

>>> host: crictl pods:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: crictl containers:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> k8s: describe netcat deployment:
error: context "cilium-959487" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-959487" does not exist

>>> k8s: netcat logs:
error: context "cilium-959487" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-959487" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-959487" does not exist

>>> k8s: coredns logs:
error: context "cilium-959487" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-959487" does not exist

>>> k8s: api server logs:
error: context "cilium-959487" does not exist

>>> host: /etc/cni:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: ip a s:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: ip r s:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: iptables-save:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: iptables table nat:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-959487

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-959487

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-959487" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-959487" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-959487

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-959487

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-959487" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-959487" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-959487" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-959487" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-959487" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: kubelet daemon config:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> k8s: kubelet logs:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21682-492630/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:15:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.109:8443
  name: cert-expiration-852898
contexts:
- context:
    cluster: cert-expiration-852898
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 21:15:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-852898
  name: cert-expiration-852898
current-context: ""
kind: Config
users:
- name: cert-expiration-852898
  user:
    client-certificate: /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/cert-expiration-852898/client.crt
    client-key: /home/jenkins/minikube-integration/21682-492630/.minikube/profiles/cert-expiration-852898/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-959487

>>> host: docker daemon status:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: docker daemon config:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: docker system info:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: cri-docker daemon status:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: cri-docker daemon config:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: cri-dockerd version:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: containerd daemon status:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: containerd daemon config:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: containerd config dump:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: crio daemon status:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: crio daemon config:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: /etc/crio:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

>>> host: crio config:
* Profile "cilium-959487" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-959487"

----------------------- debugLogs end: cilium-959487 [took: 5.058134384s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-959487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-959487
--- SKIP: TestNetworkPlugins/group/cilium (5.22s)
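Both kubectl config dumps above show current-context: "" and a single cert-expiration-852898 entry, so any kubectl call with --context kubenet-959487 or --context cilium-959487 necessarily fails with "context was not found". A small sketch of checking this from Go using client-go's clientcmd loader (a real API; the command-line arguments here are illustrative):

package main

import (
	"fmt"
	"log"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// Loads a kubeconfig and reports whether a given context exists; run as
// `go run main.go /path/to/kubeconfig cilium-959487`.
func main() {
	if len(os.Args) < 3 {
		log.Fatal("usage: main <kubeconfig> <context>")
	}
	cfg, err := clientcmd.LoadFromFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	if _, ok := cfg.Contexts[os.Args[2]]; !ok {
		// This mirrors kubectl's failure mode seen throughout the dumps above.
		fmt.Printf("context was not found for specified context: %s\n", os.Args[2])
	}
}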