Test Report: KVM_Linux_crio 21683

cf2611189ddf0f856b4ad9653dc441b770ddd00e:2025-10-02:41739

Failed tests (3/324)

Order  Failed test                                      Duration (s)
37     TestAddons/parallel/Ingress                      158.47
244    TestPreload                                      174.36
268    TestPause/serial/SecondStartNoReconfiguration    90.55
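
To re-run only these failing tests locally, the usual approach is to filter by name with go test's -run flag. A rough sketch, assuming minikube's test/integration layout, an already-built out/minikube-linux-amd64, and that job-specific flags such as --minikube-start-args exist and are set to match this KVM/crio configuration:

  # filter by the failed test names; flags after the package path go to the test binary
  go test ./test/integration -v -timeout 90m \
    -run "TestAddons/parallel/Ingress|TestPreload|TestPause/serial/SecondStartNoReconfiguration" \
    --minikube-start-args="--driver=kvm2 --container-runtime=crio"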
TestAddons/parallel/Ingress (158.47s)
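
The step that fails below is the in-VM curl against the ingress controller: "ssh: Process exited with status 28" surfaces curl's exit code 28 (operation timed out), i.e. http://127.0.0.1/ with the Host: nginx.example.com header never answered within the allotted time. A hand-run re-check against a still-running addons-355008 profile might look like this; the --max-time value and the kubectl inspection commands are illustrative additions, not part of the test:

  # repeat the probe with verbose output and an explicit timeout
  out/minikube-linux-amd64 -p addons-355008 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # inspect the ingress-nginx controller that should be serving the rule
  kubectl --context addons-355008 -n ingress-nginx get pods -o wide
  kubectl --context addons-355008 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50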

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-355008 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-355008 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-355008 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d3520043-753a-461f-bc3a-d85b4271f2da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d3520043-753a-461f-bc3a-d85b4271f2da] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.052185566s
I1002 19:51:30.587624   13449 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-355008 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.523533205s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-355008 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.211
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-355008 -n addons-355008
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-355008 logs -n 25: (1.421928036s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-586534 │ download-only-586534 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ 02 Oct 25 19:47 UTC │
	│ start │ --download-only -p binary-mirror-418628 --alsologtostderr --binary-mirror http://127.0.0.1:33167 --driver=kvm2 --container-runtime=crio --auto-update-drivers=false │ binary-mirror-418628 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ │
	│ delete │ -p binary-mirror-418628 │ binary-mirror-418628 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ 02 Oct 25 19:47 UTC │
	│ addons │ disable dashboard -p addons-355008 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ │
	│ addons │ enable dashboard -p addons-355008 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ │
	│ start │ -p addons-355008 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ 02 Oct 25 19:50 UTC │
	│ addons │ addons-355008 addons disable volcano --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:50 UTC │ 02 Oct 25 19:50 UTC │
	│ addons │ addons-355008 addons disable gcp-auth --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ enable headlamp -p addons-355008 --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable metrics-server --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ ssh │ addons-355008 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ │
	│ addons │ addons-355008 addons disable yakd --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ ip │ addons-355008 ip │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable registry --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable headlamp --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-355008 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable registry-creds --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ ssh │ addons-355008 ssh cat /opt/local-path-provisioner/pvc-a708a7f0-6298-4a5f-9828-b66cc225a095_default_test-pvc/file1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:52 UTC │
	│ addons │ addons-355008 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:51 UTC │ 02 Oct 25 19:51 UTC │
	│ addons │ addons-355008 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:52 UTC │ 02 Oct 25 19:52 UTC │
	│ addons │ addons-355008 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:52 UTC │ 02 Oct 25 19:53 UTC │
	│ ip │ addons-355008 ip │ addons-355008 │ jenkins │ v1.37.0 │ 02 Oct 25 19:53 UTC │ 02 Oct 25 19:53 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:47:34
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:47:34.048743   14168 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:47:34.049005   14168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:34.049023   14168 out.go:374] Setting ErrFile to fd 2...
	I1002 19:47:34.049027   14168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:34.049289   14168 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 19:47:34.049907   14168 out.go:368] Setting JSON to false
	I1002 19:47:34.050761   14168 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1797,"bootTime":1759432657,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:47:34.050844   14168 start.go:140] virtualization: kvm guest
	I1002 19:47:34.052517   14168 out.go:179] * [addons-355008] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 19:47:34.053613   14168 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 19:47:34.053622   14168 notify.go:221] Checking for updates...
	I1002 19:47:34.055806   14168 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:47:34.057049   14168 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 19:47:34.058284   14168 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 19:47:34.059244   14168 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:47:34.060362   14168 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:47:34.061605   14168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:47:34.091131   14168 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 19:47:34.092001   14168 start.go:306] selected driver: kvm2
	I1002 19:47:34.092015   14168 start.go:936] validating driver "kvm2" against <nil>
	I1002 19:47:34.092026   14168 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:47:34.092692   14168 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:47:34.092779   14168 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:47:34.105925   14168 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 19:47:34.105950   14168 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:47:34.120375   14168 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 19:47:34.120421   14168 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:47:34.120659   14168 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:47:34.120684   14168 cni.go:84] Creating CNI manager for ""
	I1002 19:47:34.120739   14168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 19:47:34.120752   14168 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 19:47:34.120806   14168 start.go:350] cluster config:
	{Name:addons-355008 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-355008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:47:34.120916   14168 iso.go:125] acquiring lock: {Name:mkabc2fb4ac96edf87725f05149cf44e9a15d593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:47:34.122478   14168 out.go:179] * Starting "addons-355008" primary control-plane node in "addons-355008" cluster
	I1002 19:47:34.123360   14168 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 19:47:34.123394   14168 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 19:47:34.123403   14168 cache.go:59] Caching tarball of preloaded images
	I1002 19:47:34.123497   14168 preload.go:233] Found /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 19:47:34.123507   14168 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 19:47:34.123791   14168 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/config.json ...
	I1002 19:47:34.123813   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/config.json: {Name:mk3b1614ba7277f405c182fb283685f844afe53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:34.123938   14168 start.go:361] acquireMachinesLock for addons-355008: {Name:mk83006c688982612686a8dbdd0b9c4ecd5d338c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 19:47:34.123995   14168 start.go:365] duration metric: took 44.87µs to acquireMachinesLock for "addons-355008"
	I1002 19:47:34.124022   14168 start.go:94] Provisioning new machine with config: &{Name:addons-355008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-355008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 19:47:34.124067   14168 start.go:126] createHost starting for "" (driver="kvm2")
	I1002 19:47:34.125427   14168 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1002 19:47:34.125544   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:47:34.125581   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:47:34.138440   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I1002 19:47:34.138855   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:47:34.139359   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:47:34.139382   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:47:34.139712   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:47:34.139899   14168 main.go:141] libmachine: (addons-355008) Calling .GetMachineName
	I1002 19:47:34.140048   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:34.140178   14168 start.go:160] libmachine.API.Create for "addons-355008" (driver="kvm2")
	I1002 19:47:34.140212   14168 client.go:168] LocalClient.Create starting
	I1002 19:47:34.140256   14168 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem
	I1002 19:47:34.632995   14168 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem
	I1002 19:47:35.056168   14168 main.go:141] libmachine: Running pre-create checks...
	I1002 19:47:35.056190   14168 main.go:141] libmachine: (addons-355008) Calling .PreCreateCheck
	I1002 19:47:35.056696   14168 main.go:141] libmachine: (addons-355008) Calling .GetConfigRaw
	I1002 19:47:35.057137   14168 main.go:141] libmachine: Creating machine...
	I1002 19:47:35.057152   14168 main.go:141] libmachine: (addons-355008) Calling .Create
	I1002 19:47:35.057334   14168 main.go:141] libmachine: (addons-355008) creating domain...
	I1002 19:47:35.057352   14168 main.go:141] libmachine: (addons-355008) creating network...
	I1002 19:47:35.058778   14168 main.go:141] libmachine: (addons-355008) DBG | found existing default network
	I1002 19:47:35.058936   14168 main.go:141] libmachine: (addons-355008) DBG | <network>
	I1002 19:47:35.058957   14168 main.go:141] libmachine: (addons-355008) DBG |   <name>default</name>
	I1002 19:47:35.058968   14168 main.go:141] libmachine: (addons-355008) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 19:47:35.058983   14168 main.go:141] libmachine: (addons-355008) DBG |   <forward mode='nat'>
	I1002 19:47:35.058993   14168 main.go:141] libmachine: (addons-355008) DBG |     <nat>
	I1002 19:47:35.059002   14168 main.go:141] libmachine: (addons-355008) DBG |       <port start='1024' end='65535'/>
	I1002 19:47:35.059013   14168 main.go:141] libmachine: (addons-355008) DBG |     </nat>
	I1002 19:47:35.059019   14168 main.go:141] libmachine: (addons-355008) DBG |   </forward>
	I1002 19:47:35.059029   14168 main.go:141] libmachine: (addons-355008) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 19:47:35.059040   14168 main.go:141] libmachine: (addons-355008) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 19:47:35.059050   14168 main.go:141] libmachine: (addons-355008) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 19:47:35.059060   14168 main.go:141] libmachine: (addons-355008) DBG |     <dhcp>
	I1002 19:47:35.059093   14168 main.go:141] libmachine: (addons-355008) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 19:47:35.059115   14168 main.go:141] libmachine: (addons-355008) DBG |     </dhcp>
	I1002 19:47:35.059147   14168 main.go:141] libmachine: (addons-355008) DBG |   </ip>
	I1002 19:47:35.059168   14168 main.go:141] libmachine: (addons-355008) DBG | </network>
	I1002 19:47:35.059180   14168 main.go:141] libmachine: (addons-355008) DBG | 
	I1002 19:47:35.059697   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:35.059538   14196 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123550}
	I1002 19:47:35.059758   14168 main.go:141] libmachine: (addons-355008) DBG | defining private network:
	I1002 19:47:35.059777   14168 main.go:141] libmachine: (addons-355008) DBG | 
	I1002 19:47:35.059796   14168 main.go:141] libmachine: (addons-355008) DBG | <network>
	I1002 19:47:35.059821   14168 main.go:141] libmachine: (addons-355008) DBG |   <name>mk-addons-355008</name>
	I1002 19:47:35.059834   14168 main.go:141] libmachine: (addons-355008) DBG |   <dns enable='no'/>
	I1002 19:47:35.059843   14168 main.go:141] libmachine: (addons-355008) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 19:47:35.059851   14168 main.go:141] libmachine: (addons-355008) DBG |     <dhcp>
	I1002 19:47:35.059858   14168 main.go:141] libmachine: (addons-355008) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 19:47:35.059866   14168 main.go:141] libmachine: (addons-355008) DBG |     </dhcp>
	I1002 19:47:35.059872   14168 main.go:141] libmachine: (addons-355008) DBG |   </ip>
	I1002 19:47:35.059881   14168 main.go:141] libmachine: (addons-355008) DBG | </network>
	I1002 19:47:35.059893   14168 main.go:141] libmachine: (addons-355008) DBG | 
	I1002 19:47:35.065987   14168 main.go:141] libmachine: (addons-355008) DBG | creating private network mk-addons-355008 192.168.39.0/24...
	I1002 19:47:35.136093   14168 main.go:141] libmachine: (addons-355008) DBG | private network mk-addons-355008 192.168.39.0/24 created
	I1002 19:47:35.136361   14168 main.go:141] libmachine: (addons-355008) DBG | <network>
	I1002 19:47:35.136388   14168 main.go:141] libmachine: (addons-355008) setting up store path in /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008 ...
	I1002 19:47:35.136397   14168 main.go:141] libmachine: (addons-355008) DBG |   <name>mk-addons-355008</name>
	I1002 19:47:35.136407   14168 main.go:141] libmachine: (addons-355008) DBG |   <uuid>8e54c9c6-55c7-45a7-b57d-43df1b9bd57c</uuid>
	I1002 19:47:35.136419   14168 main.go:141] libmachine: (addons-355008) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1002 19:47:35.136428   14168 main.go:141] libmachine: (addons-355008) DBG |   <mac address='52:54:00:f2:04:6c'/>
	I1002 19:47:35.136437   14168 main.go:141] libmachine: (addons-355008) DBG |   <dns enable='no'/>
	I1002 19:47:35.136447   14168 main.go:141] libmachine: (addons-355008) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 19:47:35.136456   14168 main.go:141] libmachine: (addons-355008) DBG |     <dhcp>
	I1002 19:47:35.136468   14168 main.go:141] libmachine: (addons-355008) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 19:47:35.136482   14168 main.go:141] libmachine: (addons-355008) building disk image from file:///home/jenkins/minikube-integration/21683-9524/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 19:47:35.136491   14168 main.go:141] libmachine: (addons-355008) DBG |     </dhcp>
	I1002 19:47:35.136502   14168 main.go:141] libmachine: (addons-355008) DBG |   </ip>
	I1002 19:47:35.136511   14168 main.go:141] libmachine: (addons-355008) DBG | </network>
	I1002 19:47:35.136526   14168 main.go:141] libmachine: (addons-355008) DBG | 
	I1002 19:47:35.136540   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:35.136344   14196 common.go:147] Making disk image using store path: /home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 19:47:35.136560   14168 main.go:141] libmachine: (addons-355008) Downloading /home/jenkins/minikube-integration/21683-9524/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21683-9524/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1002 19:47:35.421535   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:35.421425   14196 common.go:154] Creating ssh key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa...
	I1002 19:47:35.975150   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:35.974961   14196 common.go:160] Creating raw disk image: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/addons-355008.rawdisk...
	I1002 19:47:35.975190   14168 main.go:141] libmachine: (addons-355008) DBG | Writing magic tar header
	I1002 19:47:35.975215   14168 main.go:141] libmachine: (addons-355008) DBG | Writing SSH key tar header
	I1002 19:47:35.975243   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:35.975169   14196 common.go:174] Fixing permissions on /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008 ...
	I1002 19:47:35.975345   14168 main.go:141] libmachine: (addons-355008) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008
	I1002 19:47:35.975369   14168 main.go:141] libmachine: (addons-355008) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube/machines
	I1002 19:47:35.975383   14168 main.go:141] libmachine: (addons-355008) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008 (perms=drwx------)
	I1002 19:47:35.975393   14168 main.go:141] libmachine: (addons-355008) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube/machines (perms=drwxr-xr-x)
	I1002 19:47:35.975400   14168 main.go:141] libmachine: (addons-355008) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube (perms=drwxr-xr-x)
	I1002 19:47:35.975409   14168 main.go:141] libmachine: (addons-355008) setting executable bit set on /home/jenkins/minikube-integration/21683-9524 (perms=drwxrwxr-x)
	I1002 19:47:35.975415   14168 main.go:141] libmachine: (addons-355008) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 19:47:35.975421   14168 main.go:141] libmachine: (addons-355008) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 19:47:35.975434   14168 main.go:141] libmachine: (addons-355008) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524
	I1002 19:47:35.975444   14168 main.go:141] libmachine: (addons-355008) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 19:47:35.975454   14168 main.go:141] libmachine: (addons-355008) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 19:47:35.975467   14168 main.go:141] libmachine: (addons-355008) DBG | checking permissions on dir: /home/jenkins
	I1002 19:47:35.975474   14168 main.go:141] libmachine: (addons-355008) defining domain...
	I1002 19:47:35.975508   14168 main.go:141] libmachine: (addons-355008) DBG | checking permissions on dir: /home
	I1002 19:47:35.975524   14168 main.go:141] libmachine: (addons-355008) DBG | skipping /home - not owner
	I1002 19:47:35.977519   14168 main.go:141] libmachine: (addons-355008) defining domain using XML: 
	I1002 19:47:35.977548   14168 main.go:141] libmachine: (addons-355008) <domain type='kvm'>
	I1002 19:47:35.977556   14168 main.go:141] libmachine: (addons-355008)   <name>addons-355008</name>
	I1002 19:47:35.977560   14168 main.go:141] libmachine: (addons-355008)   <memory unit='MiB'>4096</memory>
	I1002 19:47:35.977565   14168 main.go:141] libmachine: (addons-355008)   <vcpu>2</vcpu>
	I1002 19:47:35.977568   14168 main.go:141] libmachine: (addons-355008)   <features>
	I1002 19:47:35.977573   14168 main.go:141] libmachine: (addons-355008)     <acpi/>
	I1002 19:47:35.977577   14168 main.go:141] libmachine: (addons-355008)     <apic/>
	I1002 19:47:35.977582   14168 main.go:141] libmachine: (addons-355008)     <pae/>
	I1002 19:47:35.977589   14168 main.go:141] libmachine: (addons-355008)   </features>
	I1002 19:47:35.977594   14168 main.go:141] libmachine: (addons-355008)   <cpu mode='host-passthrough'>
	I1002 19:47:35.977598   14168 main.go:141] libmachine: (addons-355008)   </cpu>
	I1002 19:47:35.977602   14168 main.go:141] libmachine: (addons-355008)   <os>
	I1002 19:47:35.977608   14168 main.go:141] libmachine: (addons-355008)     <type>hvm</type>
	I1002 19:47:35.977613   14168 main.go:141] libmachine: (addons-355008)     <boot dev='cdrom'/>
	I1002 19:47:35.977620   14168 main.go:141] libmachine: (addons-355008)     <boot dev='hd'/>
	I1002 19:47:35.977625   14168 main.go:141] libmachine: (addons-355008)     <bootmenu enable='no'/>
	I1002 19:47:35.977634   14168 main.go:141] libmachine: (addons-355008)   </os>
	I1002 19:47:35.977639   14168 main.go:141] libmachine: (addons-355008)   <devices>
	I1002 19:47:35.977647   14168 main.go:141] libmachine: (addons-355008)     <disk type='file' device='cdrom'>
	I1002 19:47:35.977657   14168 main.go:141] libmachine: (addons-355008)       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/boot2docker.iso'/>
	I1002 19:47:35.977662   14168 main.go:141] libmachine: (addons-355008)       <target dev='hdc' bus='scsi'/>
	I1002 19:47:35.977669   14168 main.go:141] libmachine: (addons-355008)       <readonly/>
	I1002 19:47:35.977673   14168 main.go:141] libmachine: (addons-355008)     </disk>
	I1002 19:47:35.977679   14168 main.go:141] libmachine: (addons-355008)     <disk type='file' device='disk'>
	I1002 19:47:35.977687   14168 main.go:141] libmachine: (addons-355008)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 19:47:35.977695   14168 main.go:141] libmachine: (addons-355008)       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/addons-355008.rawdisk'/>
	I1002 19:47:35.977702   14168 main.go:141] libmachine: (addons-355008)       <target dev='hda' bus='virtio'/>
	I1002 19:47:35.977707   14168 main.go:141] libmachine: (addons-355008)     </disk>
	I1002 19:47:35.977713   14168 main.go:141] libmachine: (addons-355008)     <interface type='network'>
	I1002 19:47:35.977760   14168 main.go:141] libmachine: (addons-355008)       <source network='mk-addons-355008'/>
	I1002 19:47:35.977785   14168 main.go:141] libmachine: (addons-355008)       <model type='virtio'/>
	I1002 19:47:35.977796   14168 main.go:141] libmachine: (addons-355008)     </interface>
	I1002 19:47:35.977813   14168 main.go:141] libmachine: (addons-355008)     <interface type='network'>
	I1002 19:47:35.977827   14168 main.go:141] libmachine: (addons-355008)       <source network='default'/>
	I1002 19:47:35.977840   14168 main.go:141] libmachine: (addons-355008)       <model type='virtio'/>
	I1002 19:47:35.977852   14168 main.go:141] libmachine: (addons-355008)     </interface>
	I1002 19:47:35.977863   14168 main.go:141] libmachine: (addons-355008)     <serial type='pty'>
	I1002 19:47:35.977873   14168 main.go:141] libmachine: (addons-355008)       <target port='0'/>
	I1002 19:47:35.977900   14168 main.go:141] libmachine: (addons-355008)     </serial>
	I1002 19:47:35.977912   14168 main.go:141] libmachine: (addons-355008)     <console type='pty'>
	I1002 19:47:35.977920   14168 main.go:141] libmachine: (addons-355008)       <target type='serial' port='0'/>
	I1002 19:47:35.977941   14168 main.go:141] libmachine: (addons-355008)     </console>
	I1002 19:47:35.977952   14168 main.go:141] libmachine: (addons-355008)     <rng model='virtio'>
	I1002 19:47:35.977964   14168 main.go:141] libmachine: (addons-355008)       <backend model='random'>/dev/random</backend>
	I1002 19:47:35.977976   14168 main.go:141] libmachine: (addons-355008)     </rng>
	I1002 19:47:35.977982   14168 main.go:141] libmachine: (addons-355008)   </devices>
	I1002 19:47:35.977999   14168 main.go:141] libmachine: (addons-355008) </domain>
	I1002 19:47:35.978025   14168 main.go:141] libmachine: (addons-355008) 
	I1002 19:47:35.984792   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:87:c1:06 in network default
	I1002 19:47:35.985305   14168 main.go:141] libmachine: (addons-355008) starting domain...
	I1002 19:47:35.985322   14168 main.go:141] libmachine: (addons-355008) ensuring networks are active...
	I1002 19:47:35.985329   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:35.985968   14168 main.go:141] libmachine: (addons-355008) Ensuring network default is active
	I1002 19:47:35.986308   14168 main.go:141] libmachine: (addons-355008) Ensuring network mk-addons-355008 is active
	I1002 19:47:35.986858   14168 main.go:141] libmachine: (addons-355008) getting domain XML...
	I1002 19:47:35.987739   14168 main.go:141] libmachine: (addons-355008) DBG | starting domain XML:
	I1002 19:47:35.987759   14168 main.go:141] libmachine: (addons-355008) DBG | <domain type='kvm'>
	I1002 19:47:35.987782   14168 main.go:141] libmachine: (addons-355008) DBG |   <name>addons-355008</name>
	I1002 19:47:35.987792   14168 main.go:141] libmachine: (addons-355008) DBG |   <uuid>5c060515-8a3a-4072-af30-8d4377b938cc</uuid>
	I1002 19:47:35.987803   14168 main.go:141] libmachine: (addons-355008) DBG |   <memory unit='KiB'>4194304</memory>
	I1002 19:47:35.987815   14168 main.go:141] libmachine: (addons-355008) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1002 19:47:35.987824   14168 main.go:141] libmachine: (addons-355008) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 19:47:35.987835   14168 main.go:141] libmachine: (addons-355008) DBG |   <os>
	I1002 19:47:35.987855   14168 main.go:141] libmachine: (addons-355008) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 19:47:35.987873   14168 main.go:141] libmachine: (addons-355008) DBG |     <boot dev='cdrom'/>
	I1002 19:47:35.987879   14168 main.go:141] libmachine: (addons-355008) DBG |     <boot dev='hd'/>
	I1002 19:47:35.987884   14168 main.go:141] libmachine: (addons-355008) DBG |     <bootmenu enable='no'/>
	I1002 19:47:35.987889   14168 main.go:141] libmachine: (addons-355008) DBG |   </os>
	I1002 19:47:35.987895   14168 main.go:141] libmachine: (addons-355008) DBG |   <features>
	I1002 19:47:35.987901   14168 main.go:141] libmachine: (addons-355008) DBG |     <acpi/>
	I1002 19:47:35.987908   14168 main.go:141] libmachine: (addons-355008) DBG |     <apic/>
	I1002 19:47:35.987930   14168 main.go:141] libmachine: (addons-355008) DBG |     <pae/>
	I1002 19:47:35.987946   14168 main.go:141] libmachine: (addons-355008) DBG |   </features>
	I1002 19:47:35.987954   14168 main.go:141] libmachine: (addons-355008) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 19:47:35.987964   14168 main.go:141] libmachine: (addons-355008) DBG |   <clock offset='utc'/>
	I1002 19:47:35.987972   14168 main.go:141] libmachine: (addons-355008) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 19:47:35.987980   14168 main.go:141] libmachine: (addons-355008) DBG |   <on_reboot>restart</on_reboot>
	I1002 19:47:35.987991   14168 main.go:141] libmachine: (addons-355008) DBG |   <on_crash>destroy</on_crash>
	I1002 19:47:35.987999   14168 main.go:141] libmachine: (addons-355008) DBG |   <devices>
	I1002 19:47:35.988018   14168 main.go:141] libmachine: (addons-355008) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 19:47:35.988026   14168 main.go:141] libmachine: (addons-355008) DBG |     <disk type='file' device='cdrom'>
	I1002 19:47:35.988032   14168 main.go:141] libmachine: (addons-355008) DBG |       <driver name='qemu' type='raw'/>
	I1002 19:47:35.988042   14168 main.go:141] libmachine: (addons-355008) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/boot2docker.iso'/>
	I1002 19:47:35.988049   14168 main.go:141] libmachine: (addons-355008) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 19:47:35.988054   14168 main.go:141] libmachine: (addons-355008) DBG |       <readonly/>
	I1002 19:47:35.988060   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 19:47:35.988066   14168 main.go:141] libmachine: (addons-355008) DBG |     </disk>
	I1002 19:47:35.988071   14168 main.go:141] libmachine: (addons-355008) DBG |     <disk type='file' device='disk'>
	I1002 19:47:35.988080   14168 main.go:141] libmachine: (addons-355008) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 19:47:35.988099   14168 main.go:141] libmachine: (addons-355008) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/addons-355008.rawdisk'/>
	I1002 19:47:35.988118   14168 main.go:141] libmachine: (addons-355008) DBG |       <target dev='hda' bus='virtio'/>
	I1002 19:47:35.988133   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 19:47:35.988144   14168 main.go:141] libmachine: (addons-355008) DBG |     </disk>
	I1002 19:47:35.988157   14168 main.go:141] libmachine: (addons-355008) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 19:47:35.988169   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 19:47:35.988181   14168 main.go:141] libmachine: (addons-355008) DBG |     </controller>
	I1002 19:47:35.988196   14168 main.go:141] libmachine: (addons-355008) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 19:47:35.988209   14168 main.go:141] libmachine: (addons-355008) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 19:47:35.988221   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 19:47:35.988233   14168 main.go:141] libmachine: (addons-355008) DBG |     </controller>
	I1002 19:47:35.988243   14168 main.go:141] libmachine: (addons-355008) DBG |     <interface type='network'>
	I1002 19:47:35.988254   14168 main.go:141] libmachine: (addons-355008) DBG |       <mac address='52:54:00:33:f0:cc'/>
	I1002 19:47:35.988262   14168 main.go:141] libmachine: (addons-355008) DBG |       <source network='mk-addons-355008'/>
	I1002 19:47:35.988277   14168 main.go:141] libmachine: (addons-355008) DBG |       <model type='virtio'/>
	I1002 19:47:35.988291   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 19:47:35.988302   14168 main.go:141] libmachine: (addons-355008) DBG |     </interface>
	I1002 19:47:35.988307   14168 main.go:141] libmachine: (addons-355008) DBG |     <interface type='network'>
	I1002 19:47:35.988315   14168 main.go:141] libmachine: (addons-355008) DBG |       <mac address='52:54:00:87:c1:06'/>
	I1002 19:47:35.988320   14168 main.go:141] libmachine: (addons-355008) DBG |       <source network='default'/>
	I1002 19:47:35.988327   14168 main.go:141] libmachine: (addons-355008) DBG |       <model type='virtio'/>
	I1002 19:47:35.988333   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 19:47:35.988340   14168 main.go:141] libmachine: (addons-355008) DBG |     </interface>
	I1002 19:47:35.988345   14168 main.go:141] libmachine: (addons-355008) DBG |     <serial type='pty'>
	I1002 19:47:35.988350   14168 main.go:141] libmachine: (addons-355008) DBG |       <target type='isa-serial' port='0'>
	I1002 19:47:35.988357   14168 main.go:141] libmachine: (addons-355008) DBG |         <model name='isa-serial'/>
	I1002 19:47:35.988362   14168 main.go:141] libmachine: (addons-355008) DBG |       </target>
	I1002 19:47:35.988366   14168 main.go:141] libmachine: (addons-355008) DBG |     </serial>
	I1002 19:47:35.988371   14168 main.go:141] libmachine: (addons-355008) DBG |     <console type='pty'>
	I1002 19:47:35.988376   14168 main.go:141] libmachine: (addons-355008) DBG |       <target type='serial' port='0'/>
	I1002 19:47:35.988380   14168 main.go:141] libmachine: (addons-355008) DBG |     </console>
	I1002 19:47:35.988385   14168 main.go:141] libmachine: (addons-355008) DBG |     <input type='mouse' bus='ps2'/>
	I1002 19:47:35.988391   14168 main.go:141] libmachine: (addons-355008) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 19:47:35.988397   14168 main.go:141] libmachine: (addons-355008) DBG |     <audio id='1' type='none'/>
	I1002 19:47:35.988413   14168 main.go:141] libmachine: (addons-355008) DBG |     <memballoon model='virtio'>
	I1002 19:47:35.988421   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 19:47:35.988426   14168 main.go:141] libmachine: (addons-355008) DBG |     </memballoon>
	I1002 19:47:35.988430   14168 main.go:141] libmachine: (addons-355008) DBG |     <rng model='virtio'>
	I1002 19:47:35.988439   14168 main.go:141] libmachine: (addons-355008) DBG |       <backend model='random'>/dev/random</backend>
	I1002 19:47:35.988444   14168 main.go:141] libmachine: (addons-355008) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 19:47:35.988451   14168 main.go:141] libmachine: (addons-355008) DBG |     </rng>
	I1002 19:47:35.988455   14168 main.go:141] libmachine: (addons-355008) DBG |   </devices>
	I1002 19:47:35.988462   14168 main.go:141] libmachine: (addons-355008) DBG | </domain>
	I1002 19:47:35.988466   14168 main.go:141] libmachine: (addons-355008) DBG | 
	I1002 19:47:37.264931   14168 main.go:141] libmachine: (addons-355008) waiting for domain to start...
	I1002 19:47:37.266274   14168 main.go:141] libmachine: (addons-355008) domain is now running
	I1002 19:47:37.266301   14168 main.go:141] libmachine: (addons-355008) waiting for IP...
	I1002 19:47:37.267117   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:37.267554   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:37.267577   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:37.268340   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:37.268449   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:37.268373   14196 retry.go:31] will retry after 213.356974ms: waiting for domain to come up
	I1002 19:47:37.484087   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:37.484565   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:37.484594   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:37.484871   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:37.484897   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:37.484845   14196 retry.go:31] will retry after 295.692579ms: waiting for domain to come up
	I1002 19:47:37.782438   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:37.782945   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:37.782975   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:37.783246   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:37.783319   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:37.783261   14196 retry.go:31] will retry after 379.686669ms: waiting for domain to come up
	I1002 19:47:38.164910   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:38.165363   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:38.165408   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:38.165583   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:38.165615   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:38.165573   14196 retry.go:31] will retry after 457.431113ms: waiting for domain to come up
	I1002 19:47:38.624145   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:38.624674   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:38.624705   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:38.624956   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:38.624985   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:38.624922   14196 retry.go:31] will retry after 751.393885ms: waiting for domain to come up
	I1002 19:47:39.378499   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:39.379046   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:39.379077   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:39.379374   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:39.379396   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:39.379319   14196 retry.go:31] will retry after 607.107665ms: waiting for domain to come up
	I1002 19:47:39.988103   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:39.988519   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:39.988545   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:39.988780   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:39.988873   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:39.988776   14196 retry.go:31] will retry after 1.191870981s: waiting for domain to come up
	I1002 19:47:41.181995   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:41.182410   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:41.182433   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:41.182718   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:41.182757   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:41.182697   14196 retry.go:31] will retry after 965.196647ms: waiting for domain to come up
	I1002 19:47:42.149962   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:42.150518   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:42.150543   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:42.150879   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:42.150910   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:42.150859   14196 retry.go:31] will retry after 1.763940626s: waiting for domain to come up
	I1002 19:47:43.916950   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:43.917410   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:43.917431   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:43.917682   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:43.917740   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:43.917669   14196 retry.go:31] will retry after 1.974045912s: waiting for domain to come up
	I1002 19:47:45.894492   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:45.895074   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:45.895103   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:45.895430   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:45.895482   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:45.895400   14196 retry.go:31] will retry after 2.303864316s: waiting for domain to come up
	I1002 19:47:48.201943   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:48.202451   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:48.202477   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:48.202743   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:48.202811   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:48.202743   14196 retry.go:31] will retry after 2.561926884s: waiting for domain to come up
	I1002 19:47:50.767218   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:50.767692   14168 main.go:141] libmachine: (addons-355008) DBG | no network interface addresses found for domain addons-355008 (source=lease)
	I1002 19:47:50.767718   14168 main.go:141] libmachine: (addons-355008) DBG | trying to list again with source=arp
	I1002 19:47:50.767977   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find current IP address of domain addons-355008 in network mk-addons-355008 (interfaces detected: [])
	I1002 19:47:50.768026   14168 main.go:141] libmachine: (addons-355008) DBG | I1002 19:47:50.767975   14196 retry.go:31] will retry after 4.087401784s: waiting for domain to come up
	I1002 19:47:54.857095   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:54.857609   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has current primary IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:54.857632   14168 main.go:141] libmachine: (addons-355008) found domain IP: 192.168.39.211
	I1002 19:47:54.857701   14168 main.go:141] libmachine: (addons-355008) reserving static IP address...
	I1002 19:47:54.858044   14168 main.go:141] libmachine: (addons-355008) DBG | unable to find host DHCP lease matching {name: "addons-355008", mac: "52:54:00:33:f0:cc", ip: "192.168.39.211"} in network mk-addons-355008
	I1002 19:47:55.041438   14168 main.go:141] libmachine: (addons-355008) DBG | Getting to WaitForSSH function...
	I1002 19:47:55.041465   14168 main.go:141] libmachine: (addons-355008) reserved static IP address 192.168.39.211 for domain addons-355008
	I1002 19:47:55.041478   14168 main.go:141] libmachine: (addons-355008) waiting for SSH...
	I1002 19:47:55.044518   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.044947   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.044988   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.045186   14168 main.go:141] libmachine: (addons-355008) DBG | Using SSH client type: external
	I1002 19:47:55.045212   14168 main.go:141] libmachine: (addons-355008) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa (-rw-------)
	I1002 19:47:55.045257   14168 main.go:141] libmachine: (addons-355008) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 19:47:55.045275   14168 main.go:141] libmachine: (addons-355008) DBG | About to run SSH command:
	I1002 19:47:55.045296   14168 main.go:141] libmachine: (addons-355008) DBG | exit 0
	I1002 19:47:55.182703   14168 main.go:141] libmachine: (addons-355008) DBG | SSH cmd err, output: <nil>: 
	I1002 19:47:55.183007   14168 main.go:141] libmachine: (addons-355008) domain creation complete
	I1002 19:47:55.183388   14168 main.go:141] libmachine: (addons-355008) Calling .GetConfigRaw
	I1002 19:47:55.183967   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:55.184155   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:55.184312   14168 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 19:47:55.184331   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:47:55.185681   14168 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 19:47:55.185708   14168 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 19:47:55.185715   14168 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 19:47:55.185731   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:55.187916   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.188331   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.188378   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.188460   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:55.188629   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.188785   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.188932   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:55.189059   14168 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:55.189273   14168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1002 19:47:55.189282   14168 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 19:47:55.295305   14168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:47:55.295338   14168 main.go:141] libmachine: Detecting the provisioner...
	I1002 19:47:55.295366   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:55.298394   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.298827   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.298856   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.299033   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:55.299241   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.299386   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.299536   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:55.299683   14168 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:55.299946   14168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1002 19:47:55.299960   14168 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 19:47:55.408654   14168 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 19:47:55.408763   14168 main.go:141] libmachine: found compatible host: buildroot
	I1002 19:47:55.408778   14168 main.go:141] libmachine: Provisioning with buildroot...
	I1002 19:47:55.408789   14168 main.go:141] libmachine: (addons-355008) Calling .GetMachineName
	I1002 19:47:55.409053   14168 buildroot.go:166] provisioning hostname "addons-355008"
	I1002 19:47:55.409085   14168 main.go:141] libmachine: (addons-355008) Calling .GetMachineName
	I1002 19:47:55.409303   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:55.412454   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.412840   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.412864   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.413056   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:55.413246   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.413401   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.413572   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:55.413717   14168 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:55.413974   14168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1002 19:47:55.413987   14168 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-355008 && echo "addons-355008" | sudo tee /etc/hostname
	I1002 19:47:55.540676   14168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-355008
	
	I1002 19:47:55.540706   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:55.543882   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.544237   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.544330   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.545359   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:55.545575   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.545774   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.545945   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:55.546151   14168 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:55.546367   14168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1002 19:47:55.546390   14168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-355008' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-355008/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-355008' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:47:55.665227   14168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:47:55.665258   14168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9524/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9524/.minikube}
	I1002 19:47:55.665283   14168 buildroot.go:174] setting up certificates
	I1002 19:47:55.665297   14168 provision.go:84] configureAuth start
	I1002 19:47:55.665309   14168 main.go:141] libmachine: (addons-355008) Calling .GetMachineName
	I1002 19:47:55.665640   14168 main.go:141] libmachine: (addons-355008) Calling .GetIP
	I1002 19:47:55.668536   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.668913   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.668941   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.669128   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:55.671348   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.671683   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.671718   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.671886   14168 provision.go:143] copyHostCerts
	I1002 19:47:55.671950   14168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem (1082 bytes)
	I1002 19:47:55.672060   14168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem (1123 bytes)
	I1002 19:47:55.672118   14168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem (1679 bytes)
	I1002 19:47:55.672165   14168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem org=jenkins.addons-355008 san=[127.0.0.1 192.168.39.211 addons-355008 localhost minikube]
	I1002 19:47:55.859590   14168 provision.go:177] copyRemoteCerts
	I1002 19:47:55.859648   14168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:47:55.859670   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:55.862517   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.862895   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:55.862919   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:55.863166   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:55.863386   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:55.863533   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:55.863645   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:47:55.948693   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 19:47:55.980333   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 19:47:56.011806   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:47:56.042583   14168 provision.go:87] duration metric: took 377.274009ms to configureAuth
	I1002 19:47:56.042613   14168 buildroot.go:189] setting minikube options for container-runtime
	I1002 19:47:56.042834   14168 config.go:182] Loaded profile config "addons-355008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 19:47:56.042976   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:56.045831   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.046200   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.046233   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.046410   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:56.046590   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:56.046751   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:56.046974   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:56.047185   14168 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:56.047377   14168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1002 19:47:56.047390   14168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 19:47:56.306769   14168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 19:47:56.306800   14168 main.go:141] libmachine: Checking connection to Docker...
	I1002 19:47:56.306810   14168 main.go:141] libmachine: (addons-355008) Calling .GetURL
	I1002 19:47:56.308121   14168 main.go:141] libmachine: (addons-355008) DBG | using libvirt version 8000000
	I1002 19:47:56.310824   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.311210   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.311246   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.311406   14168 main.go:141] libmachine: Docker is up and running!
	I1002 19:47:56.311421   14168 main.go:141] libmachine: Reticulating splines...
	I1002 19:47:56.311430   14168 client.go:171] duration metric: took 22.171206962s to LocalClient.Create
	I1002 19:47:56.311460   14168 start.go:168] duration metric: took 22.171282179s to libmachine.API.Create "addons-355008"
	I1002 19:47:56.311474   14168 start.go:294] postStartSetup for "addons-355008" (driver="kvm2")
	I1002 19:47:56.311486   14168 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:47:56.311523   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:56.311772   14168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:47:56.311798   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:56.314219   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.314611   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.314632   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.314847   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:56.315069   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:56.315258   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:56.315417   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:47:56.399804   14168 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:47:56.404824   14168 info.go:137] Remote host: Buildroot 2025.02
	I1002 19:47:56.404858   14168 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/addons for local assets ...
	I1002 19:47:56.404950   14168 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/files for local assets ...
	I1002 19:47:56.404994   14168 start.go:297] duration metric: took 93.513846ms for postStartSetup
	I1002 19:47:56.405031   14168 main.go:141] libmachine: (addons-355008) Calling .GetConfigRaw
	I1002 19:47:56.405656   14168 main.go:141] libmachine: (addons-355008) Calling .GetIP
	I1002 19:47:56.408282   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.408574   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.408599   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.408884   14168 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/config.json ...
	I1002 19:47:56.409065   14168 start.go:129] duration metric: took 22.284989939s to createHost
	I1002 19:47:56.409088   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:56.411435   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.411761   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.411787   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.411998   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:56.412199   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:56.412361   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:56.412512   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:56.412649   14168 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:56.412863   14168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1002 19:47:56.412876   14168 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 19:47:56.525113   14168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759434476.486591464
	
	I1002 19:47:56.525138   14168 fix.go:217] guest clock: 1759434476.486591464
	I1002 19:47:56.525145   14168 fix.go:230] Guest: 2025-10-02 19:47:56.486591464 +0000 UTC Remote: 2025-10-02 19:47:56.409077439 +0000 UTC m=+22.395854416 (delta=77.514025ms)
	I1002 19:47:56.525187   14168 fix.go:201] guest clock delta is within tolerance: 77.514025ms
	I1002 19:47:56.525194   14168 start.go:84] releasing machines lock for "addons-355008", held for 22.401188477s
	I1002 19:47:56.525222   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:56.525466   14168 main.go:141] libmachine: (addons-355008) Calling .GetIP
	I1002 19:47:56.528486   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.529141   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:56.528858   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.529419   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.529739   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:56.529916   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:47:56.530017   14168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:47:56.530060   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:56.530181   14168 ssh_runner.go:195] Run: cat /version.json
	I1002 19:47:56.530205   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:47:56.533275   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.533528   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.533655   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.533683   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.533852   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:56.533949   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:56.533966   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:56.534040   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:56.534180   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:47:56.534230   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:56.534309   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:47:56.534381   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:47:56.534428   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:47:56.534550   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:47:56.650051   14168 ssh_runner.go:195] Run: systemctl --version
	I1002 19:47:56.657345   14168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 19:47:56.817531   14168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:47:56.824770   14168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:47:56.824851   14168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:47:56.846521   14168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 19:47:56.846544   14168 start.go:496] detecting cgroup driver to use...
	I1002 19:47:56.846616   14168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:47:56.866940   14168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:47:56.884846   14168 docker.go:218] disabling cri-docker service (if available) ...
	I1002 19:47:56.884908   14168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 19:47:56.903399   14168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 19:47:56.919768   14168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 19:47:57.062391   14168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 19:47:57.273788   14168 docker.go:234] disabling docker service ...
	I1002 19:47:57.273854   14168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 19:47:57.290383   14168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 19:47:57.305365   14168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 19:47:57.466463   14168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 19:47:57.619167   14168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 19:47:57.637638   14168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:47:57.662307   14168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 19:47:57.662368   14168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:57.675884   14168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 19:47:57.675940   14168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:57.689480   14168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:57.703178   14168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:57.716833   14168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:47:57.731342   14168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:57.744500   14168 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:57.766899   14168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:57.780096   14168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:47:57.791283   14168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 19:47:57.791349   14168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 19:47:57.812536   14168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:47:57.825400   14168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:47:57.965771   14168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 19:47:58.079312   14168 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 19:47:58.079399   14168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 19:47:58.085194   14168 start.go:564] Will wait 60s for crictl version
	I1002 19:47:58.085264   14168 ssh_runner.go:195] Run: which crictl
	I1002 19:47:58.089546   14168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 19:47:58.131692   14168 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 19:47:58.131839   14168 ssh_runner.go:195] Run: crio --version
	I1002 19:47:58.163717   14168 ssh_runner.go:195] Run: crio --version
	I1002 19:47:58.200797   14168 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 19:47:58.201920   14168 main.go:141] libmachine: (addons-355008) Calling .GetIP
	I1002 19:47:58.205035   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:58.205401   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:47:58.205436   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:47:58.205646   14168 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 19:47:58.210568   14168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:47:58.225831   14168 kubeadm.go:883] updating cluster {Name:addons-355008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-355008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 19:47:58.225940   14168 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 19:47:58.226018   14168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 19:47:58.261525   14168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1002 19:47:58.261609   14168 ssh_runner.go:195] Run: which lz4
	I1002 19:47:58.266327   14168 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 19:47:58.271669   14168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 19:47:58.271704   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1002 19:47:59.829331   14168 crio.go:462] duration metric: took 1.563032374s to copy over tarball
	I1002 19:47:59.829401   14168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 19:48:01.516542   14168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.687109994s)
	I1002 19:48:01.516573   14168 crio.go:469] duration metric: took 1.68720858s to extract the tarball
	I1002 19:48:01.516581   14168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 19:48:01.562217   14168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 19:48:01.617132   14168 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 19:48:01.617156   14168 cache_images.go:85] Images are preloaded, skipping loading
	I1002 19:48:01.617163   14168 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.34.1 crio true true} ...
	I1002 19:48:01.617263   14168 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-355008 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-355008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 19:48:01.617327   14168 ssh_runner.go:195] Run: crio config
	I1002 19:48:01.670198   14168 cni.go:84] Creating CNI manager for ""
	I1002 19:48:01.670230   14168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 19:48:01.670250   14168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 19:48:01.670273   14168 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-355008 NodeName:addons-355008 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 19:48:01.670374   14168 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-355008"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:48:01.670434   14168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 19:48:01.684626   14168 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 19:48:01.684696   14168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 19:48:01.698019   14168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1002 19:48:01.723196   14168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:48:01.748128   14168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1002 19:48:01.772810   14168 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1002 19:48:01.777618   14168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:48:01.794251   14168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:48:01.950644   14168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 19:48:01.982448   14168 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008 for IP: 192.168.39.211
	I1002 19:48:01.982476   14168 certs.go:195] generating shared ca certs ...
	I1002 19:48:01.982493   14168 certs.go:227] acquiring lock for ca certs: {Name:mk36b72fb138c08da6f63c209f5b6ddd4af4f5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:01.982641   14168 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key
	I1002 19:48:02.150508   14168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt ...
	I1002 19:48:02.150543   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt: {Name:mk3e87a6003633370bcb7d077ba25700d747e199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.150735   14168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key ...
	I1002 19:48:02.150748   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key: {Name:mk972f431e303216010859532619ca439a0fd889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.150833   14168 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key
	I1002 19:48:02.421382   14168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.crt ...
	I1002 19:48:02.421418   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.crt: {Name:mk07dc4bcac6b66a2e31661d0f82924b39201e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.421592   14168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key ...
	I1002 19:48:02.421605   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key: {Name:mk0ecf70ac41278d7264facd93842d9fb9da940e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.421677   14168 certs.go:257] generating profile certs ...
	I1002 19:48:02.421740   14168 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.key
	I1002 19:48:02.421768   14168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt with IP's: []
	I1002 19:48:02.473010   14168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt ...
	I1002 19:48:02.473043   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: {Name:mk26c511952ab0515dbe494195580471bbcce039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.473235   14168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.key ...
	I1002 19:48:02.473251   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.key: {Name:mk931c3edbfa21108afc8f69063c7fb20ca91394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.473326   14168 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.key.746cd62b
	I1002 19:48:02.473351   14168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.crt.746cd62b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I1002 19:48:02.538057   14168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.crt.746cd62b ...
	I1002 19:48:02.538104   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.crt.746cd62b: {Name:mk082b1dc0c89b94765623d443db018072e3e55e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.538291   14168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.key.746cd62b ...
	I1002 19:48:02.538311   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.key.746cd62b: {Name:mkd413036d001639628ede6339341626840e9b24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.538418   14168 certs.go:382] copying /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.crt.746cd62b -> /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.crt
	I1002 19:48:02.538515   14168 certs.go:386] copying /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.key.746cd62b -> /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.key
	I1002 19:48:02.538568   14168 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.key
	I1002 19:48:02.538591   14168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.crt with IP's: []
	I1002 19:48:02.725664   14168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.crt ...
	I1002 19:48:02.725699   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.crt: {Name:mk96aece3b17816b1d69c1cc303b672139632031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.725868   14168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.key ...
	I1002 19:48:02.725880   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.key: {Name:mk3d67998f299ae344f7501a2eb87d5950be3303 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:02.726578   14168 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 19:48:02.726617   14168 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem (1082 bytes)
	I1002 19:48:02.726637   14168 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:48:02.726654   14168 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem (1679 bytes)
	I1002 19:48:02.727740   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:48:02.769293   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:48:02.815264   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:48:02.851582   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 19:48:02.888995   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 19:48:02.923157   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 19:48:02.957874   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 19:48:02.992817   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 19:48:03.030872   14168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:48:03.069694   14168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 19:48:03.094025   14168 ssh_runner.go:195] Run: openssl version
	I1002 19:48:03.101590   14168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:48:03.118948   14168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:48:03.125317   14168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:48 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:48:03.125382   14168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:48:03.133798   14168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
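	For reference, the trust-store steps above follow OpenSSL's hashed-symlink convention: minikubeCA.pem is linked into /etc/ssl/certs, and the b5213941.0 link name is the certificate's subject hash with a ".0" suffix. A minimal sketch of reproducing that step by hand, reusing the paths from this log (the hash value is whatever openssl prints for the CA, not a hard-coded assumption):

	    # Compute the subject hash of the CA and create the <hash>.0 symlink
	    # that OpenSSL consults when verifying against /etc/ssl/certs.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"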
	I1002 19:48:03.149999   14168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 19:48:03.155653   14168 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 19:48:03.155712   14168 kubeadm.go:400] StartCluster: {Name:addons-355008 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-355008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:48:03.155791   14168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 19:48:03.155843   14168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 19:48:03.201248   14168 cri.go:89] found id: ""
	I1002 19:48:03.201317   14168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 19:48:03.214772   14168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 19:48:03.229003   14168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 19:48:03.242411   14168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 19:48:03.242432   14168 kubeadm.go:157] found existing configuration files:
	
	I1002 19:48:03.242516   14168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 19:48:03.254599   14168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 19:48:03.254687   14168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 19:48:03.269739   14168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 19:48:03.282528   14168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 19:48:03.282600   14168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 19:48:03.296130   14168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 19:48:03.308839   14168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 19:48:03.308913   14168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 19:48:03.322312   14168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 19:48:03.335889   14168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 19:48:03.335958   14168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 19:48:03.351310   14168 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 19:48:03.410894   14168 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 19:48:03.410996   14168 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 19:48:03.535153   14168 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 19:48:03.535301   14168 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 19:48:03.535423   14168 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 19:48:03.549452   14168 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 19:48:03.719243   14168 out.go:252]   - Generating certificates and keys ...
	I1002 19:48:03.719370   14168 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 19:48:03.719456   14168 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 19:48:03.791696   14168 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 19:48:03.938487   14168 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 19:48:04.041330   14168 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 19:48:04.110327   14168 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 19:48:04.573406   14168 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 19:48:04.573524   14168 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-355008 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1002 19:48:04.900023   14168 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 19:48:04.900187   14168 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-355008 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I1002 19:48:05.127126   14168 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 19:48:05.507142   14168 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 19:48:05.546823   14168 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 19:48:05.547245   14168 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 19:48:05.976157   14168 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 19:48:06.480381   14168 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 19:48:06.513813   14168 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 19:48:06.698026   14168 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 19:48:06.759545   14168 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 19:48:06.759674   14168 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 19:48:06.761335   14168 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 19:48:06.806399   14168 out.go:252]   - Booting up control plane ...
	I1002 19:48:06.806537   14168 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 19:48:06.806656   14168 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 19:48:06.806758   14168 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 19:48:06.806904   14168 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 19:48:06.807054   14168 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 19:48:06.807230   14168 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 19:48:06.807378   14168 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 19:48:06.807442   14168 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 19:48:06.981467   14168 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 19:48:06.981654   14168 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 19:48:07.981205   14168 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001459684s
	I1002 19:48:07.984070   14168 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 19:48:07.984197   14168 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.211:8443/livez
	I1002 19:48:07.984398   14168 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 19:48:07.984525   14168 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 19:48:10.511337   14168 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.529125688s
	I1002 19:48:11.810262   14168 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.829807654s
	I1002 19:48:13.985325   14168 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.005973831s
	I1002 19:48:14.000237   14168 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 19:48:14.021966   14168 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 19:48:14.034709   14168 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 19:48:14.034961   14168 kubeadm.go:318] [mark-control-plane] Marking the node addons-355008 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 19:48:14.047797   14168 kubeadm.go:318] [bootstrap-token] Using token: 0l4ds5.snq0x6jb7vaojmjn
	I1002 19:48:14.049036   14168 out.go:252]   - Configuring RBAC rules ...
	I1002 19:48:14.049205   14168 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 19:48:14.064237   14168 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 19:48:14.072406   14168 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 19:48:14.077302   14168 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 19:48:14.080857   14168 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 19:48:14.086737   14168 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 19:48:14.393494   14168 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 19:48:14.858634   14168 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 19:48:15.390540   14168 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 19:48:15.391587   14168 kubeadm.go:318] 
	I1002 19:48:15.391705   14168 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 19:48:15.391742   14168 kubeadm.go:318] 
	I1002 19:48:15.391853   14168 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 19:48:15.391863   14168 kubeadm.go:318] 
	I1002 19:48:15.391899   14168 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 19:48:15.391980   14168 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 19:48:15.392061   14168 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 19:48:15.392069   14168 kubeadm.go:318] 
	I1002 19:48:15.392147   14168 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 19:48:15.392156   14168 kubeadm.go:318] 
	I1002 19:48:15.392233   14168 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 19:48:15.392240   14168 kubeadm.go:318] 
	I1002 19:48:15.392328   14168 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 19:48:15.392402   14168 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 19:48:15.392465   14168 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 19:48:15.392471   14168 kubeadm.go:318] 
	I1002 19:48:15.392564   14168 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 19:48:15.392670   14168 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 19:48:15.392680   14168 kubeadm.go:318] 
	I1002 19:48:15.392815   14168 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 0l4ds5.snq0x6jb7vaojmjn \
	I1002 19:48:15.392957   14168 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:715dc31514dbd4ca9540e0ef9ef3a08fa99ef6c5f537e64ea66c6086a5fa889f \
	I1002 19:48:15.392996   14168 kubeadm.go:318] 	--control-plane 
	I1002 19:48:15.393004   14168 kubeadm.go:318] 
	I1002 19:48:15.393103   14168 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 19:48:15.393113   14168 kubeadm.go:318] 
	I1002 19:48:15.393227   14168 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 0l4ds5.snq0x6jb7vaojmjn \
	I1002 19:48:15.393341   14168 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:715dc31514dbd4ca9540e0ef9ef3a08fa99ef6c5f537e64ea66c6086a5fa889f 
	I1002 19:48:15.395216   14168 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 19:48:15.395253   14168 cni.go:84] Creating CNI manager for ""
	I1002 19:48:15.395264   14168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 19:48:15.397564   14168 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 19:48:15.398754   14168 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 19:48:15.412534   14168 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 19:48:15.437104   14168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 19:48:15.437161   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:15.437182   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-355008 minikube.k8s.io/updated_at=2025_10_02T19_48_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b minikube.k8s.io/name=addons-355008 minikube.k8s.io/primary=true
	I1002 19:48:15.587046   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:15.597450   14168 ops.go:34] apiserver oom_adj: -16
	I1002 19:48:16.087226   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:16.588000   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:17.087945   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:17.587837   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:18.087316   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:18.587124   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:19.088039   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:19.587908   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:20.087408   14168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:48:20.164554   14168 kubeadm.go:1113] duration metric: took 4.727455035s to wait for elevateKubeSystemPrivileges
	I1002 19:48:20.164588   14168 kubeadm.go:402] duration metric: took 17.008880124s to StartCluster
	I1002 19:48:20.164605   14168 settings.go:142] acquiring lock: {Name:mk6a3acbc81c910cfbdc018b811be13c1e438c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:20.164757   14168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 19:48:20.165068   14168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/kubeconfig: {Name:mk0c75eb22a83f2f7ea4f564360059d4e6d21b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:48:20.165257   14168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 19:48:20.165270   14168 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 19:48:20.165328   14168 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 19:48:20.165441   14168 addons.go:69] Setting yakd=true in profile "addons-355008"
	I1002 19:48:20.165454   14168 addons.go:69] Setting ingress=true in profile "addons-355008"
	I1002 19:48:20.165469   14168 addons.go:238] Setting addon yakd=true in "addons-355008"
	I1002 19:48:20.165480   14168 addons.go:238] Setting addon ingress=true in "addons-355008"
	I1002 19:48:20.165491   14168 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-355008"
	I1002 19:48:20.165501   14168 config.go:182] Loaded profile config "addons-355008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 19:48:20.165512   14168 addons.go:69] Setting inspektor-gadget=true in profile "addons-355008"
	I1002 19:48:20.165520   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.165524   14168 addons.go:238] Setting addon inspektor-gadget=true in "addons-355008"
	I1002 19:48:20.165540   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.165507   14168 addons.go:69] Setting ingress-dns=true in profile "addons-355008"
	I1002 19:48:20.165549   14168 addons.go:69] Setting cloud-spanner=true in profile "addons-355008"
	I1002 19:48:20.165547   14168 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-355008"
	I1002 19:48:20.165558   14168 addons.go:238] Setting addon ingress-dns=true in "addons-355008"
	I1002 19:48:20.165560   14168 addons.go:238] Setting addon cloud-spanner=true in "addons-355008"
	I1002 19:48:20.165572   14168 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-355008"
	I1002 19:48:20.165581   14168 addons.go:69] Setting registry-creds=true in profile "addons-355008"
	I1002 19:48:20.165586   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.165588   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.165595   14168 addons.go:238] Setting addon registry-creds=true in "addons-355008"
	I1002 19:48:20.165604   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.165653   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.165931   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.165960   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.165990   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.165998   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.166005   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.166019   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.166020   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.166053   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.166079   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.166086   14168 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-355008"
	I1002 19:48:20.166088   14168 addons.go:69] Setting default-storageclass=true in profile "addons-355008"
	I1002 19:48:20.166103   14168 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-355008"
	I1002 19:48:20.166107   14168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-355008"
	I1002 19:48:20.166110   14168 addons.go:69] Setting volcano=true in profile "addons-355008"
	I1002 19:48:20.166115   14168 addons.go:69] Setting gcp-auth=true in profile "addons-355008"
	I1002 19:48:20.166119   14168 addons.go:238] Setting addon volcano=true in "addons-355008"
	I1002 19:48:20.166125   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.166126   14168 addons.go:69] Setting volumesnapshots=true in profile "addons-355008"
	I1002 19:48:20.166131   14168 mustload.go:65] Loading cluster: addons-355008
	I1002 19:48:20.165543   14168 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-355008"
	I1002 19:48:20.166149   14168 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-355008"
	I1002 19:48:20.165502   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.166161   14168 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-355008"
	I1002 19:48:20.166167   14168 addons.go:69] Setting metrics-server=true in profile "addons-355008"
	I1002 19:48:20.166176   14168 addons.go:238] Setting addon metrics-server=true in "addons-355008"
	I1002 19:48:20.166080   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.166175   14168 addons.go:69] Setting registry=true in profile "addons-355008"
	I1002 19:48:20.166203   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.166215   14168 addons.go:238] Setting addon registry=true in "addons-355008"
	I1002 19:48:20.166138   14168 addons.go:238] Setting addon volumesnapshots=true in "addons-355008"
	I1002 19:48:20.166081   14168 addons.go:69] Setting storage-provisioner=true in profile "addons-355008"
	I1002 19:48:20.166343   14168 addons.go:238] Setting addon storage-provisioner=true in "addons-355008"
	I1002 19:48:20.166368   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.166374   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.166480   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.166498   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.166651   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.166685   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.166736   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.166754   14168 config.go:182] Loaded profile config "addons-355008": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 19:48:20.166817   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.166841   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.167030   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.167056   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.167077   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.167093   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.167120   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.167124   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.167147   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.167147   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.167210   14168 out.go:179] * Verifying Kubernetes components...
	I1002 19:48:20.167517   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.167546   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.167566   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.167884   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.168423   14168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:48:20.175220   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.175251   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.179591   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.179632   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.180269   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.180302   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.182931   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.183843   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.203540   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40263
	I1002 19:48:20.205390   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I1002 19:48:20.208452   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I1002 19:48:20.209066   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.209609   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.209619   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.210107   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.210222   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1002 19:48:20.210620   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.210834   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.210847   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.211346   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.211360   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.211422   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.211787   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.212388   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.212421   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.213831   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.213836   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.213836   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I1002 19:48:20.213920   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.214528   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.214533   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42003
	I1002 19:48:20.214559   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.215476   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.215574   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.215589   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.218762   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.218914   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.218926   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.218986   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36943
	I1002 19:48:20.218998   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38119
	I1002 19:48:20.219004   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
	I1002 19:48:20.219517   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.219533   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.220069   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.220116   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.220169   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.220201   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.220372   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.220846   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.220862   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.221172   14168 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-355008"
	I1002 19:48:20.221287   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.221325   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.221488   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.222593   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.222631   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.222825   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.223643   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.224126   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.224141   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.224376   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.224392   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.224504   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.224827   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.225298   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.225367   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.226000   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.226027   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.226217   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I1002 19:48:20.227447   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.227580   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.227594   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.227891   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.227906   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.228316   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I1002 19:48:20.228894   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.229261   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.229336   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.229595   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.229601   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I1002 19:48:20.229925   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.230104   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.230506   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.230532   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.230583   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.230600   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.231209   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.231273   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.231789   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.232328   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.232393   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.232500   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.232537   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I1002 19:48:20.233813   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.234261   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.234297   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.242951   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.243019   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.252866   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I1002 19:48:20.252881   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I1002 19:48:20.252889   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
	I1002 19:48:20.252876   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I1002 19:48:20.253425   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.253526   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.253963   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.253977   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.254076   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.254092   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.254228   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.254413   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.254848   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.254848   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40889
	I1002 19:48:20.254894   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.255017   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.255051   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.255126   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.255147   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.255327   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.255482   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.255609   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.256048   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.256095   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.256155   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.256772   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.257538   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.257565   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.257807   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.257880   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.257903   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.258257   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.258271   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.258612   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.258669   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.259210   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.259255   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.259348   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.259364   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.260059   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I1002 19:48:20.260792   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43161
	I1002 19:48:20.261302   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.261505   14168 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 19:48:20.262047   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.262508   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.262526   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.262552   14168 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 19:48:20.262574   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 19:48:20.262591   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.262685   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I1002 19:48:20.262859   14168 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 19:48:20.263006   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.263192   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.263968   14168 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 19:48:20.264005   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 19:48:20.264036   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.271097   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.271742   14168 addons.go:238] Setting addon default-storageclass=true in "addons-355008"
	I1002 19:48:20.271786   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:20.272174   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.272200   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.272466   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.272564   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I1002 19:48:20.272669   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.272686   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.273028   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.273052   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.273571   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.274002   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.274249   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.274665   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.274734   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.274905   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.274951   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.277872   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.277963   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.277973   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.277993   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.278031   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.278043   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.278063   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.278198   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.278944   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.278944   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.279118   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.279528   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.281576   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.281973   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37997
	I1002 19:48:20.282301   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40815
	I1002 19:48:20.282493   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I1002 19:48:20.285209   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37099
	I1002 19:48:20.285325   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I1002 19:48:20.285437   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.285450   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.285450   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.285526   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.285529   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.285931   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.286017   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.286981   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.286435   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.286837   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.286991   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.287451   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.287548   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.287567   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.287635   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.287649   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.288314   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.288387   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.288807   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.288862   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.289159   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.289481   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.289501   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.289870   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.289884   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.290471   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.290606   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.290824   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.291626   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.292244   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.294681   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34757
	I1002 19:48:20.296419   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.299291   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I1002 19:48:20.299322   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.299327   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.299370   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.299298   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.299442   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
	I1002 19:48:20.299504   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.299524   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.300106   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.300273   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.300653   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.300711   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.301348   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.301890   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 19:48:20.302092   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.302184   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.301706   14168 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 19:48:20.302676   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.302790   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.303521   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.304153   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.304620   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.304642   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.304352   14168 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 19:48:20.304395   14168 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 19:48:20.304952   14168 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 19:48:20.305617   14168 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 19:48:20.305644   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.304971   14168 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 19:48:20.305878   14168 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 19:48:20.305894   14168 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 19:48:20.306319   14168 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 19:48:20.306339   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.306571   14168 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 19:48:20.306589   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 19:48:20.306604   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.305914   14168 out.go:179]   - Using image docker.io/busybox:stable
	I1002 19:48:20.307053   14168 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 19:48:20.307070   14168 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 19:48:20.307086   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.306064   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.306115   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33979
	I1002 19:48:20.307475   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.307525   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I1002 19:48:20.308529   14168 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 19:48:20.308586   14168 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 19:48:20.308853   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.308978   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42219
	I1002 19:48:20.309375   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.309633   14168 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 19:48:20.309649   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 19:48:20.309656   14168 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 19:48:20.309665   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.309802   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.309814   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.309830   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.309802   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.309876   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.310294   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.310567   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.310791   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.311328   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.311350   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.311495   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.311601   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I1002 19:48:20.311874   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.311934   14168 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 19:48:20.312067   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.312524   14168 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 19:48:20.312590   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 19:48:20.312620   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.312789   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.313467   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.313516   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 19:48:20.313670   14168 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 19:48:20.313686   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 19:48:20.313702   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.314122   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.314294   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.314937   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.315180   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33899
	I1002 19:48:20.315394   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.315737   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 19:48:20.316232   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.316902   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.316922   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.317451   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.317702   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.317855   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 19:48:20.318045   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.318764   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.318829   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:20.318964   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:20.319812   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.319841   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.319979   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 19:48:20.319980   14168 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:48:20.320807   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.321033   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.321047   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.321056   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.321062   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.321443   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.321511   14168 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:48:20.321558   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 19:48:20.321589   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.321522   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.322262   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.321907   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.322456   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.322489   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.322555   14168 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 19:48:20.322668   14168 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 19:48:20.323023   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.323029   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.323346   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:20.323357   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:20.323415   14168 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 19:48:20.323445   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 19:48:20.323461   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.323483   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 19:48:20.323717   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.323624   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:20.323874   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.323644   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:20.323923   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:20.323932   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:20.323938   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:20.323664   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.324227   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:20.324241   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:20.324441   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.324470   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	W1002 19:48:20.324473   14168 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 19:48:20.324507   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.324869   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.324948   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.325003   14168 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 19:48:20.325065   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 19:48:20.325087   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.325261   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.325307   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.325358   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.325375   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.325474   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.325659   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.325881   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.325935   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.325976   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.326050   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.326048   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.326106   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.326144   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.326362   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.326494   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.327214   14168 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 19:48:20.328094   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 19:48:20.328232   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.328205   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.328274   14168 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 19:48:20.328284   14168 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 19:48:20.328298   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.328753   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.328875   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.328906   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.328286   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.329172   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.329366   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.329556   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.329811   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.329935   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.330153   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.330359   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.330534   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 19:48:20.330568   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.330704   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.331031   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.331257   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.331492   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.331678   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.332011   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.332779   14168 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 19:48:20.333669   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.333770   14168 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 19:48:20.333784   14168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 19:48:20.333800   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.333947   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.334174   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.334295   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.334304   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.334438   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.334496   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.334871   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.335035   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.335058   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.335171   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.335182   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.335206   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.335277   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.335284   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.335444   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.335541   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.335462   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.335702   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.335889   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.336046   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.338021   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.338574   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.338613   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.338852   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.339049   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.339243   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.339387   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.340708   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33579
	I1002 19:48:20.341093   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:20.341544   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:20.341570   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:20.341915   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:20.342110   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:20.343916   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:20.344114   14168 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 19:48:20.344127   14168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 19:48:20.344140   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:20.348129   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.348700   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:20.348735   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:20.348942   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:20.349134   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:20.349272   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:20.349415   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:20.860062   14168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 19:48:20.860084   14168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 19:48:21.178433   14168 node_ready.go:35] waiting up to 6m0s for node "addons-355008" to be "Ready" ...
	I1002 19:48:21.194660   14168 node_ready.go:49] node "addons-355008" is "Ready"
	I1002 19:48:21.194706   14168 node_ready.go:38] duration metric: took 16.239756ms for node "addons-355008" to be "Ready" ...
	I1002 19:48:21.194748   14168 api_server.go:52] waiting for apiserver process to appear ...
	I1002 19:48:21.194815   14168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:48:21.382026   14168 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 19:48:21.382058   14168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 19:48:21.432618   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 19:48:21.462539   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 19:48:21.475958   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 19:48:21.527878   14168 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 19:48:21.527908   14168 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 19:48:21.556262   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 19:48:21.632594   14168 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 19:48:21.632630   14168 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 19:48:21.709485   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:48:21.748245   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 19:48:21.748551   14168 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 19:48:21.748571   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 19:48:21.824455   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 19:48:21.827557   14168 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 19:48:21.827573   14168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 19:48:21.848868   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 19:48:21.855590   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 19:48:21.880514   14168 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:21.880541   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 19:48:22.167767   14168 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 19:48:22.167797   14168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 19:48:22.184918   14168 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 19:48:22.184942   14168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 19:48:22.201540   14168 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 19:48:22.201569   14168 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 19:48:22.212273   14168 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 19:48:22.212299   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 19:48:22.378004   14168 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 19:48:22.378031   14168 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 19:48:22.403562   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:22.582775   14168 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 19:48:22.582803   14168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 19:48:22.585307   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 19:48:22.585786   14168 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 19:48:22.585812   14168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 19:48:22.608192   14168 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 19:48:22.608222   14168 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 19:48:22.726915   14168 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 19:48:22.726947   14168 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 19:48:22.926327   14168 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 19:48:22.926349   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 19:48:22.981468   14168 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 19:48:22.981496   14168 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 19:48:23.027319   14168 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 19:48:23.027351   14168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 19:48:23.162354   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 19:48:23.329135   14168 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 19:48:23.329158   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 19:48:23.330042   14168 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 19:48:23.330058   14168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 19:48:23.352617   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 19:48:23.657125   14168 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 19:48:23.657147   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 19:48:23.791439   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 19:48:23.942107   14168 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.081989955s)
	I1002 19:48:23.942147   14168 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.747314003s)
	I1002 19:48:23.942153   14168 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 19:48:23.942168   14168 api_server.go:72] duration metric: took 3.776877012s to wait for apiserver process to appear ...
	I1002 19:48:23.942175   14168 api_server.go:88] waiting for apiserver healthz status ...
	I1002 19:48:23.942203   14168 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1002 19:48:23.993691   14168 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1002 19:48:23.999152   14168 api_server.go:141] control plane version: v1.34.1
	I1002 19:48:23.999180   14168 api_server.go:131] duration metric: took 56.992107ms to wait for apiserver health ...
	I1002 19:48:23.999191   14168 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 19:48:24.073701   14168 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 19:48:24.073737   14168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 19:48:24.098329   14168 system_pods.go:59] 10 kube-system pods found
	I1002 19:48:24.098363   14168 system_pods.go:61] "amd-gpu-device-plugin-jmpmw" [d71dc057-594c-482e-9d24-56aa5d3609e8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 19:48:24.098371   14168 system_pods.go:61] "coredns-66bc5c9577-554hr" [284d6756-53c9-4606-b85a-9a9a034a7f4f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:48:24.098378   14168 system_pods.go:61] "coredns-66bc5c9577-74nrg" [448e2134-009a-4ade-8961-49e342460728] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:48:24.098381   14168 system_pods.go:61] "etcd-addons-355008" [4807431e-f4e3-44ff-a7b6-8784ac800283] Running
	I1002 19:48:24.098386   14168 system_pods.go:61] "kube-apiserver-addons-355008" [5ee4c195-b49f-42b1-9fc0-cb3deaf0937f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:48:24.098390   14168 system_pods.go:61] "kube-controller-manager-addons-355008" [bc1727e5-4a5d-4157-bddd-96a6f25fd855] Running
	I1002 19:48:24.098393   14168 system_pods.go:61] "kube-proxy-r78bp" [c2dc40f2-56d6-47ef-a820-5576f28a1c5c] Running
	I1002 19:48:24.098398   14168 system_pods.go:61] "kube-scheduler-addons-355008" [c3da39bd-ef8d-42c6-a614-49e1e3c1a9e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:48:24.098401   14168 system_pods.go:61] "nvidia-device-plugin-daemonset-74jk8" [1ee77706-ccb3-4b2a-a745-fb66b3b18f87] Pending
	I1002 19:48:24.098406   14168 system_pods.go:61] "registry-creds-764b6fb674-twppm" [74ef19e4-c758-408f-b5f2-9ce052284482] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 19:48:24.098412   14168 system_pods.go:74] duration metric: took 99.214702ms to wait for pod list to return data ...
	I1002 19:48:24.098420   14168 default_sa.go:34] waiting for default service account to be created ...
	I1002 19:48:24.119380   14168 default_sa.go:45] found service account: "default"
	I1002 19:48:24.119403   14168 default_sa.go:55] duration metric: took 20.97765ms for default service account to be created ...
	I1002 19:48:24.119412   14168 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 19:48:24.137136   14168 system_pods.go:86] 10 kube-system pods found
	I1002 19:48:24.137166   14168 system_pods.go:89] "amd-gpu-device-plugin-jmpmw" [d71dc057-594c-482e-9d24-56aa5d3609e8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1002 19:48:24.137172   14168 system_pods.go:89] "coredns-66bc5c9577-554hr" [284d6756-53c9-4606-b85a-9a9a034a7f4f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:48:24.137194   14168 system_pods.go:89] "coredns-66bc5c9577-74nrg" [448e2134-009a-4ade-8961-49e342460728] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 19:48:24.137207   14168 system_pods.go:89] "etcd-addons-355008" [4807431e-f4e3-44ff-a7b6-8784ac800283] Running
	I1002 19:48:24.137218   14168 system_pods.go:89] "kube-apiserver-addons-355008" [5ee4c195-b49f-42b1-9fc0-cb3deaf0937f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 19:48:24.137228   14168 system_pods.go:89] "kube-controller-manager-addons-355008" [bc1727e5-4a5d-4157-bddd-96a6f25fd855] Running
	I1002 19:48:24.137234   14168 system_pods.go:89] "kube-proxy-r78bp" [c2dc40f2-56d6-47ef-a820-5576f28a1c5c] Running
	I1002 19:48:24.137243   14168 system_pods.go:89] "kube-scheduler-addons-355008" [c3da39bd-ef8d-42c6-a614-49e1e3c1a9e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 19:48:24.137253   14168 system_pods.go:89] "nvidia-device-plugin-daemonset-74jk8" [1ee77706-ccb3-4b2a-a745-fb66b3b18f87] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 19:48:24.137263   14168 system_pods.go:89] "registry-creds-764b6fb674-twppm" [74ef19e4-c758-408f-b5f2-9ce052284482] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 19:48:24.137271   14168 system_pods.go:126] duration metric: took 17.853037ms to wait for k8s-apps to be running ...
	I1002 19:48:24.137279   14168 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 19:48:24.137328   14168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:48:24.494309   14168 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-355008" context rescaled to 1 replicas
	I1002 19:48:24.594346   14168 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 19:48:24.594367   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 19:48:24.753121   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.32046499s)
	I1002 19:48:24.753193   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:24.753206   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:24.753505   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:24.753522   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:24.753529   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:24.753541   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:24.753549   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:24.753805   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:24.753814   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:24.753818   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:25.146842   14168 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 19:48:25.146866   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 19:48:25.609496   14168 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 19:48:25.609519   14168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 19:48:25.925417   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 19:48:26.409581   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.947000748s)
	I1002 19:48:26.409632   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:26.409644   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:26.409963   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:26.409985   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:26.409995   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:26.410004   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:26.410238   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:26.410261   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:26.410288   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:27.717172   14168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 19:48:27.717221   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:27.720681   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:27.721225   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:27.721251   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:27.721533   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:27.721752   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:27.721905   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:27.722045   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:27.930764   14168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 19:48:28.051513   14168 addons.go:238] Setting addon gcp-auth=true in "addons-355008"
	I1002 19:48:28.051579   14168 host.go:66] Checking if "addons-355008" exists ...
	I1002 19:48:28.052093   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:28.052135   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:28.066231   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I1002 19:48:28.066775   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:28.067252   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:28.067270   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:28.067594   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:28.068262   14168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:48:28.068306   14168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:48:28.082523   14168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I1002 19:48:28.083095   14168 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:48:28.083635   14168 main.go:141] libmachine: Using API Version  1
	I1002 19:48:28.083665   14168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:48:28.084063   14168 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:48:28.084270   14168 main.go:141] libmachine: (addons-355008) Calling .GetState
	I1002 19:48:28.085949   14168 main.go:141] libmachine: (addons-355008) Calling .DriverName
	I1002 19:48:28.086180   14168 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 19:48:28.086207   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHHostname
	I1002 19:48:28.089364   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:28.089889   14168 main.go:141] libmachine: (addons-355008) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:f0:cc", ip: ""} in network mk-addons-355008: {Iface:virbr1 ExpiryTime:2025-10-02 20:47:51 +0000 UTC Type:0 Mac:52:54:00:33:f0:cc Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-355008 Clientid:01:52:54:00:33:f0:cc}
	I1002 19:48:28.089912   14168 main.go:141] libmachine: (addons-355008) DBG | domain addons-355008 has defined IP address 192.168.39.211 and MAC address 52:54:00:33:f0:cc in network mk-addons-355008
	I1002 19:48:28.090133   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHPort
	I1002 19:48:28.090335   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHKeyPath
	I1002 19:48:28.090546   14168 main.go:141] libmachine: (addons-355008) Calling .GetSSHUsername
	I1002 19:48:28.090703   14168 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/addons-355008/id_rsa Username:docker}
	I1002 19:48:29.629141   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.153150497s)
	I1002 19:48:29.629196   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629198   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.07289772s)
	I1002 19:48:29.629249   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629254   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.919744296s)
	I1002 19:48:29.629267   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629275   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629284   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629209   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629351   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.881069505s)
	I1002 19:48:29.629388   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629401   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629399   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.804907033s)
	I1002 19:48:29.629423   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629435   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629513   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.780614456s)
	I1002 19:48:29.629534   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629542   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629609   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.773997675s)
	I1002 19:48:29.629623   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629629   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629735   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.226134072s)
	W1002 19:48:29.629756   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:29.629781   14168 retry.go:31] will retry after 309.511526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:29.629826   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.044499283s)
	I1002 19:48:29.629840   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629847   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.629939   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.467560047s)
	I1002 19:48:29.629951   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.629959   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630026   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.277384467s)
	I1002 19:48:29.630038   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630046   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630155   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630172   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630172   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630186   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630187   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.630196   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630204   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630207   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630215   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.630221   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630227   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630268   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630274   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.630281   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630287   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630334   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630352   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630360   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.630367   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630373   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630419   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630437   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630443   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.630450   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630456   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630561   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630594   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630601   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.630609   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630616   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630649   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630687   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630695   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.630705   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.630717   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.630797   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.630821   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.630828   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.632179   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.632218   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.632225   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.632233   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.632244   14168 addons.go:479] Verifying addon registry=true in "addons-355008"
	I1002 19:48:29.632250   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.632463   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.632498   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.632505   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.632514   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.632520   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.632569   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.632586   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.632592   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.632632   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.632640   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.632962   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.632974   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.632986   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.632997   14168 addons.go:479] Verifying addon ingress=true in "addons-355008"
	I1002 19:48:29.635020   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.635024   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.635049   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.635183   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.635192   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.635200   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.635071   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.635237   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.635087   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.635282   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.635293   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.635301   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.635096   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.635516   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.635542   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.635554   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.635942   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.635984   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.635991   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.635999   14168 addons.go:479] Verifying addon metrics-server=true in "addons-355008"
	I1002 19:48:29.636150   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.636172   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.636191   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.637748   14168 out.go:179] * Verifying registry addon...
	I1002 19:48:29.637767   14168 out.go:179] * Verifying ingress addon...
	I1002 19:48:29.638617   14168 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-355008 service yakd-dashboard -n yakd-dashboard
	
	I1002 19:48:29.639881   14168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 19:48:29.640136   14168 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 19:48:29.672839   14168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 19:48:29.672860   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:29.675391   14168 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 19:48:29.675408   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:29.721908   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.721933   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.722196   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:29.722221   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.722237   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.738993   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:29.739018   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:29.739371   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:29.739389   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:29.739404   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	W1002 19:48:29.739510   14168 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I1002 19:48:29.812635   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.021125669s)
	I1002 19:48:29.812670   14168 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.675312193s)
	W1002 19:48:29.812687   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 19:48:29.812701   14168 system_svc.go:56] duration metric: took 5.67541505s WaitForService to wait for kubelet
	I1002 19:48:29.812711   14168 retry.go:31] will retry after 176.918408ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 19:48:29.812709   14168 kubeadm.go:586] duration metric: took 9.647418766s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:48:29.812745   14168 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:48:29.883411   14168 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 19:48:29.883438   14168 node_conditions.go:123] node cpu capacity is 2
	I1002 19:48:29.883451   14168 node_conditions.go:105] duration metric: took 70.699837ms to run NodePressure ...
	I1002 19:48:29.883461   14168 start.go:242] waiting for startup goroutines ...
	I1002 19:48:29.939490   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:29.990687   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 19:48:30.181092   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:30.185829   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:30.690861   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:30.691546   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:30.876747   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.951255397s)
	I1002 19:48:30.876798   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:30.876808   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:30.876752   14168 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.790547327s)
	I1002 19:48:30.877081   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:30.877098   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:30.877107   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:30.877114   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:30.877341   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:30.877362   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:30.877374   14168 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-355008"
	I1002 19:48:30.877401   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:30.879117   14168 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 19:48:30.879122   14168 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 19:48:30.880187   14168 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 19:48:30.880782   14168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 19:48:30.881225   14168 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 19:48:30.881254   14168 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 19:48:30.891493   14168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 19:48:30.891522   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:31.074080   14168 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 19:48:31.074113   14168 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 19:48:31.149157   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:31.152575   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:31.207309   14168 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 19:48:31.207328   14168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 19:48:31.368429   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 19:48:31.387117   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:31.662627   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:31.664989   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:31.886696   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:32.145930   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:32.146415   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:32.390554   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:32.651198   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:32.651908   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:32.885399   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:33.145614   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:33.147316   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:33.405206   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.46567485s)
	W1002 19:48:33.405251   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:33.405284   14168 retry.go:31] will retry after 269.282843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:33.405290   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.414567168s)
	I1002 19:48:33.405332   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:33.405350   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:33.405669   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:33.405690   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:33.405709   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:33.405718   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:33.405946   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:33.405966   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:33.429190   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.060718155s)
	I1002 19:48:33.429258   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:33.429295   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:33.429609   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:48:33.429666   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:33.429679   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:33.429692   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:48:33.429700   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:48:33.429917   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:48:33.429931   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:48:33.430970   14168 addons.go:479] Verifying addon gcp-auth=true in "addons-355008"
	I1002 19:48:33.432597   14168 out.go:179] * Verifying gcp-auth addon...
	I1002 19:48:33.434397   14168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 19:48:33.446479   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:33.463259   14168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 19:48:33.463277   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:33.652434   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:33.653816   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:33.674966   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:33.885783   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:33.943069   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:34.145128   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:34.146433   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:34.419105   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:34.439771   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:34.647337   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:34.650610   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:34.885902   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:34.940348   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:35.146094   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:35.150393   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:35.378522   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.703514638s)
	W1002 19:48:35.378567   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:35.378597   14168 retry.go:31] will retry after 562.64243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:35.387210   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:35.440798   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:35.649798   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:35.654177   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:35.887997   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:35.937747   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:35.941871   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:36.147643   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:36.148062   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:36.384814   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:36.440073   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:36.651372   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:36.651390   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:36.887000   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:36.940287   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:37.147647   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:37.147815   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:37.210845   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.268936069s)
	W1002 19:48:37.210907   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:37.210934   14168 retry.go:31] will retry after 461.587204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:37.386607   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:37.440171   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:37.646623   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:37.646804   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:37.672843   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:37.889338   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:37.939817   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:38.146346   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:38.146372   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:38.386357   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:38.440464   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:38.646497   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:38.648892   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:38.884796   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:38.942497   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:39.046381   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.3734718s)
	W1002 19:48:39.046425   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:39.046447   14168 retry.go:31] will retry after 1.342080741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:39.147979   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:39.149609   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:39.386718   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:39.457886   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:39.652062   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:39.652678   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:39.886361   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:39.938340   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:40.146075   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:40.147889   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:40.386066   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:40.389068   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:40.441588   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:40.652949   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:40.653051   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:40.886130   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:40.938633   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:41.144880   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:41.146372   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:41.387143   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:41.439092   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:41.633930   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.244822342s)
	W1002 19:48:41.633967   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:41.633985   14168 retry.go:31] will retry after 2.458168204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
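Editor's note: the validation failure repeated throughout this log says the bundled /etc/kubernetes/addons/ig-crd.yaml is missing the required top-level apiVersion and kind fields, so every forced re-apply below exits 1 until the manifest is fixed (or validation is bypassed with --validate=false, as the error itself suggests). For orientation only, a minimal sketch of the header a CustomResourceDefinition manifest normally starts with is shown here; all names and the spec below are illustrative placeholders, not the contents of the actual ig-crd.yaml:

	# hypothetical CRD skeleton -- illustrates the fields kubectl's validator expects;
	# the real ig-crd.yaml content is not visible in this log
	apiVersion: apiextensions.k8s.io/v1      # top-level field reported as "not set"
	kind: CustomResourceDefinition           # top-level field reported as "not set"
	metadata:
	  name: examples.gadget.example.io       # placeholder name
	spec:
	  group: gadget.example.io               # placeholder group
	  scope: Namespaced
	  names:
	    plural: examples
	    singular: example
	    kind: Example
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

Because the file itself never changes between attempts, the retries logged at 19:48:41, 19:48:45, 19:48:49 and later hit the same validation error each time.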
	I1002 19:48:41.648629   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:41.649178   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:41.890371   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:41.938920   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:42.146274   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:42.147492   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:42.385931   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:42.442148   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:42.645875   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:42.647778   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:43.261242   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:43.262252   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:43.262563   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:43.263715   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:43.385419   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:43.440111   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:43.650565   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:43.652206   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:43.888507   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:43.939632   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:44.093072   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:44.145600   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:44.145897   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:44.385833   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:44.438231   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:44.656170   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:44.656465   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:44.889123   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:44.939282   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:45.149090   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:45.149267   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:45.850514   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:45.851139   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:45.851207   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.758100019s)
	W1002 19:48:45.851228   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:45.851245   14168 retry.go:31] will retry after 2.889450301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:45.852321   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:45.852475   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:45.945416   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:45.945538   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:46.145234   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:46.146751   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:46.384131   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:46.438223   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:46.650439   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:46.653870   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:46.885708   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:46.939089   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:47.143557   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:47.145930   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:47.385370   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:47.438410   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:47.650483   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:47.653788   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:47.888050   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:47.938144   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:48.147122   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:48.150028   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:48.386996   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:48.440701   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:48.652128   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:48.652151   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:48.741377   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:48.886070   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:48.937971   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:49.143758   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:49.144069   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:49.388443   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:49.438673   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 19:48:49.528374   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:49.528415   14168 retry.go:31] will retry after 5.862916213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:49.644101   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:49.644149   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:49.884514   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:49.938754   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:50.144945   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:50.145706   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:50.385330   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:50.438376   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:50.646256   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:50.649017   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:50.885538   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:50.939993   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:51.143934   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:51.144191   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:51.386509   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:51.438578   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:51.648226   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:51.648359   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:51.885638   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:51.985783   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:52.144403   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:52.144935   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:52.385345   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:52.438476   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:52.644315   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:52.644456   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:52.885653   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:52.937550   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:53.144402   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:53.145595   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:53.384735   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:53.437444   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:53.644051   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:53.645127   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:53.884488   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:53.938728   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:54.144777   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:54.145267   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:54.384100   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:54.439148   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:54.645577   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:54.646012   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:54.885219   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:54.939172   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:55.143490   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:55.144994   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:55.384429   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:55.392397   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:48:55.438652   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:55.661616   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:55.665655   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:55.885993   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:55.984606   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:56.156072   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:56.156349   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:56.384698   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:56.438716   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:56.575408   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.182977483s)
	W1002 19:48:56.575446   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:56.575464   14168 retry.go:31] will retry after 3.560208682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:48:56.648748   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:56.651000   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:56.887233   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:56.939184   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:57.148511   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:57.148680   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:57.384355   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:57.438978   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:57.643938   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:57.645567   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:57.885880   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:57.941360   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:58.150256   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:58.153640   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:58.385941   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:58.440208   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:58.651972   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:58.656050   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:58.885869   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:58.937675   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:59.145356   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:59.146019   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:59.386427   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:59.437940   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:48:59.643439   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:48:59.645612   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:48:59.889548   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:48:59.938152   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:00.136468   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:49:00.151886   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:00.152125   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:00.385934   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:00.439146   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:00.646494   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:00.649083   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:00.885504   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:00.937867   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:01.143893   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:01.144258   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:01.547462   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:01.551174   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:01.554516   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.417987212s)
	W1002 19:49:01.554555   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:01.554578   14168 retry.go:31] will retry after 5.922037592s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:01.646377   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:01.646598   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:01.886908   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:01.938598   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:02.148259   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:02.148289   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:02.389062   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:02.438940   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:02.651005   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:02.653209   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:02.893741   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:02.938523   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:03.144383   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:03.145908   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:03.388091   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:03.438479   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:03.645506   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:03.646520   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:03.885894   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:03.938269   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:04.144748   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:04.144851   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:04.387834   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:04.440014   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:04.648404   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:04.651795   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:04.886429   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:04.938130   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:05.145742   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:05.145868   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:05.570509   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:05.570798   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:05.645882   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:05.646692   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:05.885779   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:05.986039   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:06.145132   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:06.145471   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:06.385429   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:06.487081   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:06.646279   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:06.648637   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:06.885957   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:06.937780   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:07.144549   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:07.146695   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:07.384235   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:07.439129   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:07.477330   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:49:07.648838   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:07.649679   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:07.885093   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:07.939439   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:08.149157   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:08.150272   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 19:49:08.235106   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:08.235144   14168 retry.go:31] will retry after 9.550008928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:08.387459   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:08.440151   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:08.647170   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:08.647947   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:08.885901   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:08.940193   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:09.146098   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:09.147171   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:09.392510   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:09.439865   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:09.648065   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:09.648156   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:09.885911   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:09.941162   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:10.146610   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 19:49:10.147112   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:10.385150   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:10.453173   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:10.644744   14168 kapi.go:107] duration metric: took 41.00484372s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 19:49:10.644765   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:10.884475   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:10.939320   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:11.143870   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:11.385446   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:11.439150   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:11.643611   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:11.884983   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:11.938050   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:12.144540   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:12.385241   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:12.439058   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:12.644641   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:12.884659   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:12.942667   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:13.145874   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:13.386220   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:13.439191   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:13.645071   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:13.888037   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:13.938006   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:14.146260   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:14.386384   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:14.439139   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:14.649368   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:14.890470   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:14.943198   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:15.145486   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:15.386106   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:15.440226   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:15.644695   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:15.886336   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:15.940806   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:16.150893   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:16.389230   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:16.443543   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:17.077909   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:17.078735   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:17.080201   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:17.144031   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:17.385141   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:17.437707   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:17.644985   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:17.786179   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:49:17.886027   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:17.939159   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:18.149158   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:18.386872   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:18.441053   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:18.650062   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:18.887668   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:18.904770   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.118544767s)
	W1002 19:49:18.904825   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:18.904856   14168 retry.go:31] will retry after 21.667083621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:18.940571   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:19.148193   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:19.385467   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:19.443762   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:19.661843   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:19.885774   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:19.939854   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:20.399098   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:20.399347   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:20.441937   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:20.650368   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:20.884793   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:20.940874   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:21.145676   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:21.384174   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:21.439466   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:21.649215   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:21.887208   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:21.939205   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:22.145628   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:22.385331   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:22.439169   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:22.643627   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:22.884826   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:22.937790   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:23.144163   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:23.388466   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:23.438692   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:23.646350   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:23.885330   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:23.945235   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:24.145920   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:24.388274   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:24.488760   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:24.644550   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:24.890815   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:24.937376   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:25.144046   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:25.385967   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:25.438967   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:25.644432   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:25.885813   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:25.937839   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:26.145161   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:26.384929   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:26.439277   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:26.647962   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:26.887925   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:26.937394   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:27.144273   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:27.387158   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:27.439812   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:27.652551   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:27.887852   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:27.945665   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:28.145406   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:28.389284   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:28.439867   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:28.645552   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:28.885521   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:28.941403   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:29.144428   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:29.385441   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:29.441451   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:29.646627   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:29.888829   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:29.943446   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:30.144068   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:30.385141   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:30.438170   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:30.822021   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:30.886432   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:30.986992   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:31.144899   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:31.384561   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:31.437714   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:31.644447   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:31.886358   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:31.939035   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:32.144505   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:32.385144   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:32.438053   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:32.647319   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:32.892890   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:32.942657   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:33.146613   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:33.392300   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:33.438358   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:33.650585   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:33.885624   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:33.940126   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:34.145750   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:34.388168   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:34.441026   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:34.647560   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:34.892389   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:34.939407   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:35.143782   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:35.384750   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:35.437643   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:35.647775   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:35.886281   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:35.940166   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:36.147182   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:36.385910   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:36.437823   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:36.643882   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:36.886782   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:36.938057   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:37.144318   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:37.389116   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:37.439554   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:37.648497   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:37.885909   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:37.938339   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:38.143567   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:38.386395   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:38.438758   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:38.644854   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:38.884780   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:38.940197   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:39.147012   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:39.384395   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:39.439718   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:39.646905   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:39.887656   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:39.939223   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:40.145806   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:40.385515   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:40.444697   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:40.572913   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:49:40.650602   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:40.887859   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:40.989993   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:41.156792   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:41.389930   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:41.445374   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:41.649172   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:41.856391   14168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.283421422s)
	W1002 19:49:41.856433   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:41.856454   14168 retry.go:31] will retry after 27.205546876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:49:41.886582   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:41.938860   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:42.148257   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:42.384841   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:42.438073   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:42.649573   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:42.885776   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:42.943228   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:43.161035   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:43.388964   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:43.489177   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:43.644444   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:43.885896   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:43.940802   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:44.147137   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:44.387712   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:44.437681   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:44.645462   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:44.887499   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:44.937676   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:45.144692   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:45.386645   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:45.439874   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:45.648317   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:45.886760   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:45.938871   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:46.147319   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:46.385662   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:46.442249   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:46.643865   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:46.884051   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:46.938744   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:47.145909   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:47.384944   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:47.438765   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:47.647268   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:47.894893   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:47.944279   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:48.143546   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:48.400700   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:48.439496   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:48.644959   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:48.884298   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:48.938092   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:49.150201   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:49.388932   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:49.674607   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:49.674659   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:49.886579   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:49.941937   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:50.146245   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:50.386361   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:50.489661   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:50.644054   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:50.887082   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:50.939261   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:51.155926   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:51.426291   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:51.439088   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:51.644451   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:51.886523   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:51.939261   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:52.145211   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:52.385366   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:52.439439   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:52.644065   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:52.885690   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:52.940388   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:53.146520   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:53.385753   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:53.437795   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:53.644848   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:53.885209   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:53.939386   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:54.144534   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:54.386277   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:54.442140   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:54.650222   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:54.884920   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:54.937913   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:55.144785   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:55.384923   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 19:49:55.438078   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:55.650265   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:55.885211   14168 kapi.go:107] duration metric: took 1m25.004429254s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 19:49:55.937986   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:56.146073   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:56.439797   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:56.644592   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:56.938287   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:57.143864   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:57.438376   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:57.645337   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:57.937822   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:58.145571   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:58.437899   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:58.645553   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:58.937843   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:59.144842   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:59.437215   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:49:59.644854   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:49:59.938062   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:00.143747   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:00.437828   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:00.645673   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:00.938654   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:01.144232   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:01.438956   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:01.645193   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:01.937919   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:02.144532   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:02.438353   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:02.644320   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:02.937818   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:03.144670   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:03.438160   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:03.644124   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:03.938493   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:04.144770   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:04.438404   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:04.644329   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:04.938923   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:05.146855   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:05.438433   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:05.647168   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:05.939043   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:06.144268   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:06.437531   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:06.644904   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:06.938332   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:07.143831   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:07.438355   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:07.644872   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:07.938388   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:08.144913   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:08.438544   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:08.644135   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:08.938460   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:09.062985   14168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 19:50:09.146864   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:09.439015   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:09.646098   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1002 19:50:09.844565   14168 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 19:50:09.844648   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:50:09.844668   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:50:09.844933   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:50:09.844951   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:50:09.844958   14168 main.go:141] libmachine: Making call to close driver server
	I1002 19:50:09.844966   14168 main.go:141] libmachine: (addons-355008) Calling .Close
	I1002 19:50:09.844986   14168 main.go:141] libmachine: (addons-355008) DBG | Closing plugin on server side
	I1002 19:50:09.845194   14168 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:50:09.845210   14168 main.go:141] libmachine: Making call to close connection to plugin binary
	W1002 19:50:09.845312   14168 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 19:50:09.938816   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:10.147383   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:10.438707   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:10.645339   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:10.939513   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:11.145260   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:11.438093   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:11.645055   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:11.938736   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:12.144499   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:12.438113   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:12.644164   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:12.938757   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:13.144864   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:13.437972   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:13.645988   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:13.938276   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:14.144598   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:14.438575   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:14.646135   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:14.938647   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:15.144574   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:15.438618   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:15.645584   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:15.939714   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:16.144754   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:16.438042   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:16.645248   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:16.938373   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:17.144106   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:17.438617   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:17.646185   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:17.938985   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:18.144901   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:18.439499   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:18.644163   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:18.939290   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:19.145133   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:19.437878   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:19.645803   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:19.938803   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:20.146918   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:20.438515   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:20.646163   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:20.938790   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:21.147082   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:21.438556   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:21.644788   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:21.938522   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:22.144838   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:22.439588   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:22.644429   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:22.938019   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:23.144071   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:23.438962   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:23.645435   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:23.938397   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:24.144642   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:24.438582   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:24.644441   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:24.938409   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:25.145161   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:25.438773   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:25.645089   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:25.939148   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:26.143420   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:26.437659   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:26.644399   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:26.937671   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:27.144320   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:27.437367   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:27.871075   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:27.938964   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:28.145479   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:28.438336   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:28.644071   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:28.938782   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:29.144825   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:29.437996   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:29.645141   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:29.940189   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:30.143707   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:30.439108   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:30.645129   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:30.939699   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:31.145448   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:31.438316   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:31.648678   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:31.938549   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:32.144511   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:32.438334   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:32.644950   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:32.937705   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:33.145152   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:33.438564   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:33.645153   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:33.939873   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:34.143855   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:34.437234   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:34.645238   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:34.938336   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:35.145537   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:35.438198   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:35.644736   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:35.938613   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:36.144809   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:36.437958   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:36.645016   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:36.938367   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:37.144556   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:37.438314   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:37.643804   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:37.941646   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:38.144706   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:38.439047   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:38.644856   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:38.938418   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:39.144636   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:39.437807   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:39.644617   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:39.938292   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:40.146004   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:40.438285   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:40.646873   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:40.939008   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:41.145145   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:41.438656   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:41.645532   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:41.937686   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:42.144899   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:42.439347   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:42.644564   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:42.937961   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:43.144500   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:43.438315   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:43.644673   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:43.937781   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:44.146150   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:44.438148   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:44.643759   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:44.937879   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:45.144838   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:45.438055   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:45.644021   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:45.940807   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:46.145819   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:46.439380   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:46.644929   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:46.943091   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:47.148656   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:47.439167   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:47.643924   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:47.938600   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:48.144279   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:48.440714   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:48.648884   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:48.938034   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:49.151209   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:49.444887   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:49.645615   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:49.941709   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:50.148153   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:50.439184   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:50.649984   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:51.224061   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:51.224279   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:51.439140   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:51.647401   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:51.945471   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:52.146120   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:52.439305   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:52.650358   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:52.938931   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:53.145683   14168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 19:50:53.438440   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:53.645367   14168 kapi.go:107] duration metric: took 2m24.005226631s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 19:50:53.937681   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:54.437470   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:54.938459   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:55.440759   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:55.940885   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:56.449644   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:56.938878   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:57.438341   14168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 19:50:57.938984   14168 kapi.go:107] duration metric: took 2m24.504583032s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 19:50:57.940501   14168 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-355008 cluster.
	I1002 19:50:57.942036   14168 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 19:50:57.943366   14168 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 19:50:57.944743   14168 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1002 19:50:57.946028   14168 addons.go:514] duration metric: took 2m37.78069277s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner amd-gpu-device-plugin nvidia-device-plugin registry-creds metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1002 19:50:57.946093   14168 start.go:247] waiting for cluster config update ...
	I1002 19:50:57.946117   14168 start.go:256] writing updated cluster config ...
	I1002 19:50:57.946505   14168 ssh_runner.go:195] Run: rm -f paused
	I1002 19:50:57.955513   14168 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 19:50:57.960025   14168 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-554hr" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:57.970402   14168 pod_ready.go:94] pod "coredns-66bc5c9577-554hr" is "Ready"
	I1002 19:50:57.970429   14168 pod_ready.go:86] duration metric: took 10.375422ms for pod "coredns-66bc5c9577-554hr" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:57.973363   14168 pod_ready.go:83] waiting for pod "etcd-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:57.980549   14168 pod_ready.go:94] pod "etcd-addons-355008" is "Ready"
	I1002 19:50:57.980575   14168 pod_ready.go:86] duration metric: took 7.191325ms for pod "etcd-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:57.982716   14168 pod_ready.go:83] waiting for pod "kube-apiserver-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:57.988392   14168 pod_ready.go:94] pod "kube-apiserver-addons-355008" is "Ready"
	I1002 19:50:57.988421   14168 pod_ready.go:86] duration metric: took 5.668602ms for pod "kube-apiserver-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:57.991172   14168 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:58.360941   14168 pod_ready.go:94] pod "kube-controller-manager-addons-355008" is "Ready"
	I1002 19:50:58.360984   14168 pod_ready.go:86] duration metric: took 369.790669ms for pod "kube-controller-manager-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:58.560322   14168 pod_ready.go:83] waiting for pod "kube-proxy-r78bp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:58.961225   14168 pod_ready.go:94] pod "kube-proxy-r78bp" is "Ready"
	I1002 19:50:58.961257   14168 pod_ready.go:86] duration metric: took 400.903334ms for pod "kube-proxy-r78bp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:59.161269   14168 pod_ready.go:83] waiting for pod "kube-scheduler-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:59.560426   14168 pod_ready.go:94] pod "kube-scheduler-addons-355008" is "Ready"
	I1002 19:50:59.560452   14168 pod_ready.go:86] duration metric: took 399.160919ms for pod "kube-scheduler-addons-355008" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 19:50:59.560462   14168 pod_ready.go:40] duration metric: took 1.604910673s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 19:50:59.608051   14168 start.go:627] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 19:50:59.610640   14168 out.go:179] * Done! kubectl is now configured to use "addons-355008" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.639578279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41948aab-2bd1-47ad-a35f-8c52c46c125a name=/runtime.v1.RuntimeService/Version
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.639692705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41948aab-2bd1-47ad-a35f-8c52c46c125a name=/runtime.v1.RuntimeService/Version
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.642021839Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6efb9f64-7928-4935-8a16-8eaf1de84525 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.643395205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759434826643368029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6efb9f64-7928-4935-8a16-8eaf1de84525 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.644530714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe144225-f6b1-48ea-8a5b-1546d2ebfa78 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.644592848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe144225-f6b1-48ea-8a5b-1546d2ebfa78 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.644907701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3bd3b0a96995b770acbfbe00eec32e8584dc4cf55e5fc7cd147b350b5c0be9,PodSandboxId:74004d285ab28b303a35b943b9f3306a19cd5b43858e5113114f6420fb310ca4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759434684332909517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3520043-753a-461f-bc3a-d85b4271f2da,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e3cc1da0bd8bcf73ebeddd16de0d7ec017dbf4c514b16c3e6a4900b61ed54f,PodSandboxId:885b30b379ec975f93dc9129d62e2d61742b4782b387c8f1993c418331011fc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759434664122433762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3e6724-fe44-444c-92cc-4c9f950e8e37,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee3f5162a6de08fa47584ac2789f5bfe7ca2b24adf4e3c006b73fef5567b76a,PodSandboxId:cf0c70bdb024cb52954fe1b3cab08c912cf9703ee42f9e39bf9c238411f355c0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759434652734058011,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-76vsl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d34f9dd-caff-417e-8a09-ad90678835d5,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6281fd9d6e01b246b03e03c81d96caa8b6d6e9a750efa153bec3894ba606ab6d,PodSandboxId:eec4a0315671c3e6260896981e48ce0ba0261963320d54764a50a0e928a19ac0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759434580572565163,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pkrgf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c23fdd12-b43b-40d9-bf25-7ad78e474a66,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e5701c76a93f328332356a269778ebd400d956ca26da10c4b59dc9cece9d4c,PodSandboxId:10c83e5bcca43d16c3e4f7b9a148a8584b691431c94e372ad3f1f6a64e51b950,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759434578184674104,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kdtl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80b479c8-8fe5-41db-b6e9-a92b005be31e,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081fc89c97049e01c2470036e5def68aaef7183773712886701bf25bbe87ba0f,PodSandboxId:2acdb4da480e9c83da414eadc7a99fb6a3a50a05abfb0279e719eda32e035a19,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759434576130528211,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-l8bhx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fb92b572-893f-40b4-a330-fd17d21d0ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5868518a595ebbcbe0aa5c6da2f35dca166161d8a2ce7b247663ff0226580b,PodSandboxId:f6597a747a3c8c885673082884eda507936d8ae21888825e921d7544e3368d93,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759434560941619288,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a37f97-906f-4f6a-9828-e480976a10fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e046f298f6ec5d55be885f6210b2481e2150343f8a4c28bd830810afce9685e,PodSandboxId:0d58b75ada84342c6fa934ff9a9075825796066ffa21110
2dd0f3328359388b2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759434531689844101,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jmpmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d71dc057-594c-482e-9d24-56aa5d3609e8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfa9e439dc127ce9b227b3d6a8c5a3ed00fa0d69a9e1ac74fba13939fc9cf1df,PodSandboxId:494e38e
5fd6b6a7db7be3f2141301866302f27601e58db738d7399d8fa914cc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759434509468621427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1fb9a7-a4b0-4c45-84d0-e8eff5bb983c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a46deea32352d17a0c3b676f0a6a5d88a89ffabc1318a8b1789f7477ff46a6c,PodSandboxId:ae9064556a868f13629
b18c6a8494daa9fc50d7c1bc2363c4f9c04611a1f2c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759434501532221177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-554hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284d6756-53c9-4606-b85a-9a9a034a7f4f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3241debf7f0c9cd369032f7093b6b77e68df0ca98cdbc47a49afa98bff55eb40,PodSandboxId:f3ba8767848422331c1d4d017012cde114e0b49cd4483b26230c79266396adff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759434500828071383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r78bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2dc40f2-56d6-47ef-a820-5576f28a1c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a62f52f74c7a1cd7c2da70815796003fb22d75f8ef96d32651d4c84400d2d60,PodSandboxId:0ed09d7b62a328451932217aa815ffe29102767571527b5b6cefe41275fc462e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759434488806604933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddf65b576167ac1bd9ac08492d623a36,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65dd6bebcaa045921a9fe9ea1b6149ae0a647d6fc77c2f0e7fc6863967d8c7e6,PodSandboxId:74964e95d96a3770be98fa15b9627dc5093e003d2f9e3f7050b035b34bf102c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759434488840256177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb
c9ff65694338e015d5cba6d60a8c3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9961b87bd3412ca159b61217e60547f4028a4fbe1c3c4f2d95a41375ae44c67b,PodSandboxId:fc1ddd6f3c3cce0dc8d8cf10de50c9a2770a1432f82cfd63f211ddbc47affd65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759434488778282101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addo
ns-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf829f7ac02b97adb4c6c60623350c08,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc3de843106f2a1d51737d2b5073d398032e3c4ea587ef986a7c0a9ec399354,PodSandboxId:f5fc9cb627abbf81f6b7ab94f41af7b1fc80c11d4515cff09294ebb67d77332e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759434488786287274,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceafbaa62e14166b54391f6300c58550,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe144225-f6b1-48ea-8a5b-1546d2ebfa78 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.685661060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1a71af5-a432-4f74-8eda-8a3e4c8da415 name=/runtime.v1.RuntimeService/Version
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.685758545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1a71af5-a432-4f74-8eda-8a3e4c8da415 name=/runtime.v1.RuntimeService/Version
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.687986550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e270a751-a324-4ffa-9afc-09de108889cc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.690055630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759434826690026473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e270a751-a324-4ffa-9afc-09de108889cc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.691004010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c776feb0-3128-432f-9ac7-0562141f97f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.691098073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c776feb0-3128-432f-9ac7-0562141f97f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.691469161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3bd3b0a96995b770acbfbe00eec32e8584dc4cf55e5fc7cd147b350b5c0be9,PodSandboxId:74004d285ab28b303a35b943b9f3306a19cd5b43858e5113114f6420fb310ca4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759434684332909517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3520043-753a-461f-bc3a-d85b4271f2da,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e3cc1da0bd8bcf73ebeddd16de0d7ec017dbf4c514b16c3e6a4900b61ed54f,PodSandboxId:885b30b379ec975f93dc9129d62e2d61742b4782b387c8f1993c418331011fc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759434664122433762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3e6724-fe44-444c-92cc-4c9f950e8e37,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee3f5162a6de08fa47584ac2789f5bfe7ca2b24adf4e3c006b73fef5567b76a,PodSandboxId:cf0c70bdb024cb52954fe1b3cab08c912cf9703ee42f9e39bf9c238411f355c0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759434652734058011,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-76vsl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d34f9dd-caff-417e-8a09-ad90678835d5,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6281fd9d6e01b246b03e03c81d96caa8b6d6e9a750efa153bec3894ba606ab6d,PodSandboxId:eec4a0315671c3e6260896981e48ce0ba0261963320d54764a50a0e928a19ac0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759434580572565163,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pkrgf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c23fdd12-b43b-40d9-bf25-7ad78e474a66,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e5701c76a93f328332356a269778ebd400d956ca26da10c4b59dc9cece9d4c,PodSandboxId:10c83e5bcca43d16c3e4f7b9a148a8584b691431c94e372ad3f1f6a64e51b950,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759434578184674104,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kdtl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80b479c8-8fe5-41db-b6e9-a92b005be31e,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081fc89c97049e01c2470036e5def68aaef7183773712886701bf25bbe87ba0f,PodSandboxId:2acdb4da480e9c83da414eadc7a99fb6a3a50a05abfb0279e719eda32e035a19,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759434576130528211,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-l8bhx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fb92b572-893f-40b4-a330-fd17d21d0ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5868518a595ebbcbe0aa5c6da2f35dca166161d8a2ce7b247663ff0226580b,PodSandboxId:f6597a747a3c8c885673082884eda507936d8ae21888825e921d7544e3368d93,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759434560941619288,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a37f97-906f-4f6a-9828-e480976a10fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e046f298f6ec5d55be885f6210b2481e2150343f8a4c28bd830810afce9685e,PodSandboxId:0d58b75ada84342c6fa934ff9a9075825796066ffa21110
2dd0f3328359388b2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759434531689844101,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jmpmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d71dc057-594c-482e-9d24-56aa5d3609e8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfa9e439dc127ce9b227b3d6a8c5a3ed00fa0d69a9e1ac74fba13939fc9cf1df,PodSandboxId:494e38e
5fd6b6a7db7be3f2141301866302f27601e58db738d7399d8fa914cc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759434509468621427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1fb9a7-a4b0-4c45-84d0-e8eff5bb983c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a46deea32352d17a0c3b676f0a6a5d88a89ffabc1318a8b1789f7477ff46a6c,PodSandboxId:ae9064556a868f13629
b18c6a8494daa9fc50d7c1bc2363c4f9c04611a1f2c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759434501532221177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-554hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284d6756-53c9-4606-b85a-9a9a034a7f4f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3241debf7f0c9cd369032f7093b6b77e68df0ca98cdbc47a49afa98bff55eb40,PodSandboxId:f3ba8767848422331c1d4d017012cde114e0b49cd4483b26230c79266396adff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759434500828071383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r78bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2dc40f2-56d6-47ef-a820-5576f28a1c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a62f52f74c7a1cd7c2da70815796003fb22d75f8ef96d32651d4c84400d2d60,PodSandboxId:0ed09d7b62a328451932217aa815ffe29102767571527b5b6cefe41275fc462e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759434488806604933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddf65b576167ac1bd9ac08492d623a36,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65dd6bebcaa045921a9fe9ea1b6149ae0a647d6fc77c2f0e7fc6863967d8c7e6,PodSandboxId:74964e95d96a3770be98fa15b9627dc5093e003d2f9e3f7050b035b34bf102c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759434488840256177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb
c9ff65694338e015d5cba6d60a8c3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9961b87bd3412ca159b61217e60547f4028a4fbe1c3c4f2d95a41375ae44c67b,PodSandboxId:fc1ddd6f3c3cce0dc8d8cf10de50c9a2770a1432f82cfd63f211ddbc47affd65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759434488778282101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addo
ns-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf829f7ac02b97adb4c6c60623350c08,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc3de843106f2a1d51737d2b5073d398032e3c4ea587ef986a7c0a9ec399354,PodSandboxId:f5fc9cb627abbf81f6b7ab94f41af7b1fc80c11d4515cff09294ebb67d77332e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759434488786287274,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceafbaa62e14166b54391f6300c58550,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c776feb0-3128-432f-9ac7-0562141f97f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.716802353Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.717377736Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.729901225Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7284f68d-ba3e-4475-af3b-31d307b97eb3 name=/runtime.v1.ImageService/ListImages
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.731811943Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,RepoTags:[registry.k8s.io/kube-apiserver:v1.34.1],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964 registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902],Size_:89046001,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,RepoTags:[registry.k8s.io/kube-controller-manager:v1.34.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89 registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992],Size_:76004181,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image
{Id:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,RepoTags:[registry.k8s.io/kube-scheduler:v1.34.1],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31 registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500],Size_:53844823,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,RepoTags:[registry.k8s.io/kube-proxy:v1.34.1],RepoDigests:[registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a],Size_:73138073,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f,RepoTags:[registry.k8s.io/pause:3.10.1],RepoDigests:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24
c registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41],Size_:742092,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,RepoTags:[registry.k8s.io/etcd:3.6.4-0],RepoDigests:[registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19],Size_:195976448,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,RepoTags:[registry.k8s.io/coredns/coredns:v1.12.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998 registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c],Size_:76103547,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de53
0d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c,RepoTags:[docker.io/kindest/kindnetd:v20250512-df8de77b],RepoDigests:[docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11],Size_:109379124,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:fcbf0ecf3195887f4b6b497d542660d9e7b1409b502bfddc284c04e3d8155f57,RepoTags:[],RepoDigests:[nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd nvcr.io/nvidia/k8s-device-plugin@sha256:a
d155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32],Size_:730848593,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,RepoTags:[],RepoDigests:[docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f],Size_:26765047,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:bd5dce5cbea6ec9ae9a29369516af2dd4cd06289a6c34bb9118b44184a2df56c,RepoTags:[],RepoDigests:[gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58 gcr.io/cloud-spanner-emulator/emulator@sha256:37b616a24f4d1d6e7d1cd85523d2726eabc2bbcdfbbb097f18fa1e63728ba83e],Size_:151589272,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:b9e1e3849e07022817ebc1612858382f0c0b91d00e4dcd2996adc1df6ced26e9,RepoTags:[],RepoDigests:[registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1 registry.k8s.io/metri
cs-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2],Size_:83737288,Uid:&Int64Value{Value:65534,},Username:,Spec:nil,Pinned:false,},&Image{Id:3c52eedeec804bef2771a5ea8871d31f61d61129050469324ccb8a51890cbe16,RepoTags:[],RepoDigests:[docker.io/library/registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d docker.io/library/registry@sha256:42be4a75b921489e51574e12889b71484a6524a02c4008c52c6f26ce30c7b990],Size_:58241277,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1],Size_:53879466,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,RepoTags:[],RepoDigests:[docker.io
/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89],Size_:417012800,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,RepoTags:[],RepoDigests:[docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef docker.io/rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246],Size_:35264960,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,RepoTags:[],RepoDigests:[docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd docker.io/marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624],Size_:205121029,Uid:&Int64Value{Value:10001,},Username:,Spec:nil,Pinned:fal
se,},&Image{Id:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,RepoTags:[],RepoDigests:[ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5 ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c8297508bfb8d5c020acb1dc9eb75b63759cb473f91668107117201092dd4aca],Size_:162318098,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:316cd3217236293ba00ab9b5eac4056b15d9ab870b3eeeeb99e0d9139a608aa3],Size_:71112828,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,Pinned:false,},&Image{Id:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7
332fc5ff1e4b20c6b6af68d76925922 registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280],Size_:54632579,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c],Size_:56980232,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7],Size_:57899101,Ui
d:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b],Size_:57303140,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c],Size_:21521620,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/hostpath
plugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5],Size_:37200280,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0],Size_:19577497,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8],Size_:60675705,Uid:&Int64Value{Valu
e:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,RepoTags:[],RepoDigests:[registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5],Size_:57410185,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,RepoTags:[],RepoDigests:[registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef registry.k8s.io/ingress-nginx/controller@sha256:cfcddeb96818021113c47ca3db866d083e80550444ed5f24fdc76f66911db270],Size_:325674188,Uid:nil,Username:www-data,Spec:nil,Pinned:false,},&Image{Id:7a12f2aed60be6363388152087a70fffb37f8e9ba549e6c6fad1172e24c71a5d,RepoTags:[],RepoDigests:[gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971daf
bbba5c91eaf882b1528797fb8 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7],Size_:54446270,Uid:&Int64Value{Value:65532,},Username:,Spec:nil,Pinned:false,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,RepoTags:[docker.io/library/nginx:alpine],RepoDigests:[docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a],Size_:53949946,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:3bd49f1f42c46a0def0e20
8037f718f7a122902ffa846fd7c2757e017c1ee29e,RepoTags:[],RepoDigests:[ghcr.io/headlamp-k8s/headlamp@sha256:bbca49510385effd4fe27be07dd12b845f530f6abbbaa06ef35ff7b4ae06cc39 ghcr.io/headlamp-k8s/headlamp@sha256:cdbeb1dff093990ea7f3f58456bdf32dc4a163c9dc76409f2efaa036f8d86713],Size_:247709579,Uid:nil,Username:headlamp,Spec:nil,Pinned:false,},&Image{Id:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,RepoTags:[gcr.io/k8s-minikube/busybox:latest],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b],Size_:1462480,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,RepoTags:[],RepoDigests:[docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e7
9],Size_:4497096,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:1827167fde90df99d9341a27fbce2b445550eb2b18105e03f98102f00c0ec35e,RepoTags:[docker.io/library/busybox:stable],RepoDigests:[docker.io/library/busybox@sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 docker.io/library/busybox@sha256:4b8407fadd8100c61b097d63efe992b2c033e7d371c9117f7a9462fe87e31176],Size_:4670414,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc],Size_:196550530,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=7284f68d-ba3e-4475-af3b-31d307b97eb3 name=/runtime.v1.ImageService/ListImages
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.732681010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20128aed-29d5-4a33-8167-d7bbd9707bb4 name=/runtime.v1.RuntimeService/Version
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.732747722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20128aed-29d5-4a33-8167-d7bbd9707bb4 name=/runtime.v1.RuntimeService/Version
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.734672740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a624c507-8cb0-4690-bb65-4e0d4c9fb775 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.736232222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759434826736208954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598015,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a624c507-8cb0-4690-bb65-4e0d4c9fb775 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.737126245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f96f52c-e42a-4b14-972f-638bd8e00b0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.737225469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f96f52c-e42a-4b14-972f-638bd8e00b0b name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 19:53:46 addons-355008 crio[825]: time="2025-10-02 19:53:46.737622368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af3bd3b0a96995b770acbfbe00eec32e8584dc4cf55e5fc7cd147b350b5c0be9,PodSandboxId:74004d285ab28b303a35b943b9f3306a19cd5b43858e5113114f6420fb310ca4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1759434684332909517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d3520043-753a-461f-bc3a-d85b4271f2da,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5e3cc1da0bd8bcf73ebeddd16de0d7ec017dbf4c514b16c3e6a4900b61ed54f,PodSandboxId:885b30b379ec975f93dc9129d62e2d61742b4782b387c8f1993c418331011fc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1759434664122433762,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db3e6724-fe44-444c-92cc-4c9f950e8e37,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ee3f5162a6de08fa47584ac2789f5bfe7ca2b24adf4e3c006b73fef5567b76a,PodSandboxId:cf0c70bdb024cb52954fe1b3cab08c912cf9703ee42f9e39bf9c238411f355c0,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1759434652734058011,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-76vsl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d34f9dd-caff-417e-8a09-ad90678835d5,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6281fd9d6e01b246b03e03c81d96caa8b6d6e9a750efa153bec3894ba606ab6d,PodSandboxId:eec4a0315671c3e6260896981e48ce0ba0261963320d54764a50a0e928a19ac0,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da673
4db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759434580572565163,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-pkrgf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c23fdd12-b43b-40d9-bf25-7ad78e474a66,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e5701c76a93f328332356a269778ebd400d956ca26da10c4b59dc9cece9d4c,PodSandboxId:10c83e5bcca43d16c3e4f7b9a148a8584b691431c94e372ad3f1f6a64e51b950,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1759434578184674104,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-kdtl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 80b479c8-8fe5-41db-b6e9-a92b005be31e,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081fc89c97049e01c2470036e5def68aaef7183773712886701bf25bbe87ba0f,PodSandboxId:2acdb4da480e9c83da414eadc7a99fb6a3a50a05abfb0279e719eda32e035a19,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9660a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1759434576130528211,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-l8bhx,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: fb92b572-893f-40b4-a330-fd17d21d0ff0,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb5868518a595ebbcbe0aa5c6da2f35dca166161d8a2ce7b247663ff0226580b,PodSandboxId:f6597a747a3c8c885673082884eda507936d8ae21888825e921d7544e3368d93,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1759434560941619288,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09a37f97-906f-4f6a-9828-e480976a10fc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e046f298f6ec5d55be885f6210b2481e2150343f8a4c28bd830810afce9685e,PodSandboxId:0d58b75ada84342c6fa934ff9a9075825796066ffa21110
2dd0f3328359388b2,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1759434531689844101,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-jmpmw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d71dc057-594c-482e-9d24-56aa5d3609e8,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfa9e439dc127ce9b227b3d6a8c5a3ed00fa0d69a9e1ac74fba13939fc9cf1df,PodSandboxId:494e38e
5fd6b6a7db7be3f2141301866302f27601e58db738d7399d8fa914cc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759434509468621427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1fb9a7-a4b0-4c45-84d0-e8eff5bb983c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a46deea32352d17a0c3b676f0a6a5d88a89ffabc1318a8b1789f7477ff46a6c,PodSandboxId:ae9064556a868f13629
b18c6a8494daa9fc50d7c1bc2363c4f9c04611a1f2c3d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759434501532221177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-554hr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284d6756-53c9-4606-b85a-9a9a034a7f4f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3241debf7f0c9cd369032f7093b6b77e68df0ca98cdbc47a49afa98bff55eb40,PodSandboxId:f3ba8767848422331c1d4d017012cde114e0b49cd4483b26230c79266396adff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759434500828071383,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r78bp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2dc40f2-56d6-47ef-a820-5576f28a1c5c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a62f52f74c7a1cd7c2da70815796003fb22d75f8ef96d32651d4c84400d2d60,PodSandboxId:0ed09d7b62a328451932217aa815ffe29102767571527b5b6cefe41275fc462e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759434488806604933,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddf65b576167ac1bd9ac08492d623a36,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65dd6bebcaa045921a9fe9ea1b6149ae0a647d6fc77c2f0e7fc6863967d8c7e6,PodSandboxId:74964e95d96a3770be98fa15b9627dc5093e003d2f9e3f7050b035b34bf102c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759434488840256177,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb
c9ff65694338e015d5cba6d60a8c3d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9961b87bd3412ca159b61217e60547f4028a4fbe1c3c4f2d95a41375ae44c67b,PodSandboxId:fc1ddd6f3c3cce0dc8d8cf10de50c9a2770a1432f82cfd63f211ddbc47affd65,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759434488778282101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addo
ns-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf829f7ac02b97adb4c6c60623350c08,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc3de843106f2a1d51737d2b5073d398032e3c4ea587ef986a7c0a9ec399354,PodSandboxId:f5fc9cb627abbf81f6b7ab94f41af7b1fc80c11d4515cff09294ebb67d77332e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759434488786287274,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-355008,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ceafbaa62e14166b54391f6300c58550,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f96f52c-e42a-4b14-972f-638bd8e00b0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	af3bd3b0a9699       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   74004d285ab28       nginx
	d5e3cc1da0bd8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   885b30b379ec9       busybox
	5ee3f5162a6de       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             2 minutes ago       Running             controller                0                   cf0c70bdb024c       ingress-nginx-controller-9cc49f96f-76vsl
	6281fd9d6e01b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              patch                     0                   eec4a0315671c       ingress-nginx-admission-patch-pkrgf
	11e5701c76a93       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   10c83e5bcca43       ingress-nginx-admission-create-kdtl5
	081fc89c97049       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago       Running             gadget                    0                   2acdb4da480e9       gadget-l8bhx
	cb5868518a595       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   f6597a747a3c8       kube-ingress-dns-minikube
	2e046f298f6ec       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   0d58b75ada843       amd-gpu-device-plugin-jmpmw
	dfa9e439dc127       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   494e38e5fd6b6       storage-provisioner
	0a46deea32352       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   ae9064556a868       coredns-66bc5c9577-554hr
	3241debf7f0c9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   f3ba876784842       kube-proxy-r78bp
	65dd6bebcaa04       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   74964e95d96a3       kube-controller-manager-addons-355008
	5a62f52f74c7a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   0ed09d7b62a32       kube-scheduler-addons-355008
	afc3de843106f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   f5fc9cb627abb       kube-apiserver-addons-355008
	9961b87bd3412       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   fc1ddd6f3c3cc       etcd-addons-355008
	
	
	==> coredns [0a46deea32352d17a0c3b676f0a6a5d88a89ffabc1318a8b1789f7477ff46a6c] <==
	[INFO] 10.244.0.8:37276 - 13423 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000454179s
	[INFO] 10.244.0.8:37276 - 45927 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000136485s
	[INFO] 10.244.0.8:37276 - 43436 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000465646s
	[INFO] 10.244.0.8:37276 - 10299 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000391357s
	[INFO] 10.244.0.8:37276 - 59866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000387725s
	[INFO] 10.244.0.8:37276 - 1353 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000580915s
	[INFO] 10.244.0.8:37276 - 26031 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.001206846s
	[INFO] 10.244.0.8:33528 - 8938 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000189699s
	[INFO] 10.244.0.8:33528 - 9213 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000150279s
	[INFO] 10.244.0.8:47441 - 28749 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160886s
	[INFO] 10.244.0.8:47441 - 28452 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000429144s
	[INFO] 10.244.0.8:38919 - 24954 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072598s
	[INFO] 10.244.0.8:38919 - 24687 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000271118s
	[INFO] 10.244.0.8:60078 - 13633 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00019938s
	[INFO] 10.244.0.8:60078 - 13439 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000357767s
	[INFO] 10.244.0.23:46652 - 47558 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000518644s
	[INFO] 10.244.0.23:46556 - 19858 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000210712s
	[INFO] 10.244.0.23:59137 - 4542 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012545s
	[INFO] 10.244.0.23:49087 - 64153 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000157688s
	[INFO] 10.244.0.23:36328 - 62115 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099585s
	[INFO] 10.244.0.23:51522 - 23335 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106128s
	[INFO] 10.244.0.23:40763 - 45520 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004018698s
	[INFO] 10.244.0.23:57862 - 22795 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.004572968s
	[INFO] 10.244.0.27:50610 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000376416s
	[INFO] 10.244.0.27:46501 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00024618s
	
	
	==> describe nodes <==
	Name:               addons-355008
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-355008
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=addons-355008
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T19_48_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-355008
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 19:48:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-355008
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 19:53:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 19:52:50 +0000   Thu, 02 Oct 2025 19:48:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 19:52:50 +0000   Thu, 02 Oct 2025 19:48:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 19:52:50 +0000   Thu, 02 Oct 2025 19:48:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 19:52:50 +0000   Thu, 02 Oct 2025 19:48:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    addons-355008
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c0605158a3a4072af308d4377b938cc
	  System UUID:                5c060515-8a3a-4072-af30-8d4377b938cc
	  Boot ID:                    ddf342d6-a00f-4a92-bd7e-57db21f73729
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     hello-world-app-5d498dc89-wgl6d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-l8bhx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-76vsl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m18s
	  kube-system                 amd-gpu-device-plugin-jmpmw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 coredns-66bc5c9577-554hr                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m28s
	  kube-system                 etcd-addons-355008                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m35s
	  kube-system                 kube-apiserver-addons-355008                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-addons-355008       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-r78bp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-addons-355008                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m24s                  kube-proxy       
	  Normal  Starting                 5m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node addons-355008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node addons-355008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m40s)  kubelet          Node addons-355008 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m33s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m33s                  kubelet          Node addons-355008 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s                  kubelet          Node addons-355008 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s                  kubelet          Node addons-355008 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m32s                  kubelet          Node addons-355008 status is now: NodeReady
	  Normal  RegisteredNode           5m29s                  node-controller  Node addons-355008 event: Registered Node addons-355008 in Controller
	
	
	==> dmesg <==
	[Oct 2 19:49] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.660486] kauditd_printk_skb: 38 callbacks suppressed
	[ +10.602230] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.994977] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.193948] kauditd_printk_skb: 11 callbacks suppressed
	[  +1.994486] kauditd_printk_skb: 85 callbacks suppressed
	[  +6.074255] kauditd_printk_skb: 91 callbacks suppressed
	[  +5.095707] kauditd_printk_skb: 90 callbacks suppressed
	[Oct 2 19:50] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000058] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.026468] kauditd_printk_skb: 53 callbacks suppressed
	[Oct 2 19:51] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.498268] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.630609] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.440900] kauditd_printk_skb: 89 callbacks suppressed
	[  +1.812946] kauditd_printk_skb: 58 callbacks suppressed
	[  +3.625109] kauditd_printk_skb: 52 callbacks suppressed
	[  +2.593186] kauditd_printk_skb: 127 callbacks suppressed
	[  +2.725872] kauditd_printk_skb: 81 callbacks suppressed
	[  +1.000029] kauditd_printk_skb: 126 callbacks suppressed
	[Oct 2 19:52] kauditd_printk_skb: 26 callbacks suppressed
	[  +4.563446] kauditd_printk_skb: 25 callbacks suppressed
	[  +0.000287] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.850699] kauditd_printk_skb: 41 callbacks suppressed
	[Oct 2 19:53] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [9961b87bd3412ca159b61217e60547f4028a4fbe1c3c4f2d95a41375ae44c67b] <==
	{"level":"warn","ts":"2025-10-02T19:49:51.419034Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.661531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T19:49:51.419075Z","caller":"traceutil/trace.go:172","msg":"trace[1289653243] range","detail":"{range_begin:/registry/endpointslices; range_end:; response_count:0; response_revision:1157; }","duration":"123.73554ms","start":"2025-10-02T19:49:51.295332Z","end":"2025-10-02T19:49:51.419068Z","steps":["trace[1289653243] 'agreement among raft nodes before linearized reading'  (duration: 123.642563ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:49:51.419290Z","caller":"traceutil/trace.go:172","msg":"trace[2131936477] transaction","detail":"{read_only:false; response_revision:1157; number_of_response:1; }","duration":"225.465258ms","start":"2025-10-02T19:49:51.193816Z","end":"2025-10-02T19:49:51.419281Z","steps":["trace[2131936477] 'process raft request'  (duration: 190.105576ms)","trace[2131936477] 'compare'  (duration: 34.652835ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T19:50:27.863325Z","caller":"traceutil/trace.go:172","msg":"trace[1901319977] linearizableReadLoop","detail":"{readStateIndex:1273; appliedIndex:1273; }","duration":"225.531881ms","start":"2025-10-02T19:50:27.637769Z","end":"2025-10-02T19:50:27.863301Z","steps":["trace[1901319977] 'read index received'  (duration: 225.527024ms)","trace[1901319977] 'applied index is now lower than readState.Index'  (duration: 4.258µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T19:50:27.863471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"225.688898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T19:50:27.863537Z","caller":"traceutil/trace.go:172","msg":"trace[1078565543] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1230; }","duration":"225.783116ms","start":"2025-10-02T19:50:27.637746Z","end":"2025-10-02T19:50:27.863529Z","steps":["trace[1078565543] 'agreement among raft nodes before linearized reading'  (duration: 225.662101ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:50:27.863819Z","caller":"traceutil/trace.go:172","msg":"trace[263834477] transaction","detail":"{read_only:false; response_revision:1231; number_of_response:1; }","duration":"230.091417ms","start":"2025-10-02T19:50:27.633720Z","end":"2025-10-02T19:50:27.863811Z","steps":["trace[263834477] 'process raft request'  (duration: 229.988566ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:50:51.213219Z","caller":"traceutil/trace.go:172","msg":"trace[2019891612] linearizableReadLoop","detail":"{readStateIndex:1310; appliedIndex:1310; }","duration":"305.590934ms","start":"2025-10-02T19:50:50.907612Z","end":"2025-10-02T19:50:51.213203Z","steps":["trace[2019891612] 'read index received'  (duration: 305.586288ms)","trace[2019891612] 'applied index is now lower than readState.Index'  (duration: 4.226µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T19:50:51.213357Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"305.719325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T19:50:51.213375Z","caller":"traceutil/trace.go:172","msg":"trace[1875842460] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1261; }","duration":"305.763565ms","start":"2025-10-02T19:50:50.907606Z","end":"2025-10-02T19:50:51.213370Z","steps":["trace[1875842460] 'agreement among raft nodes before linearized reading'  (duration: 305.694408ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T19:50:51.213397Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T19:50:50.907588Z","time spent":"305.802038ms","remote":"127.0.0.1:39958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-10-02T19:50:51.213692Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"282.77942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T19:50:51.213726Z","caller":"traceutil/trace.go:172","msg":"trace[589439207] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1262; }","duration":"282.818254ms","start":"2025-10-02T19:50:50.930900Z","end":"2025-10-02T19:50:51.213718Z","steps":["trace[589439207] 'agreement among raft nodes before linearized reading'  (duration: 282.764514ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:50:51.213782Z","caller":"traceutil/trace.go:172","msg":"trace[948208532] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"379.389321ms","start":"2025-10-02T19:50:50.834384Z","end":"2025-10-02T19:50:51.213773Z","steps":["trace[948208532] 'process raft request'  (duration: 379.190042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T19:50:51.213930Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-02T19:50:50.834366Z","time spent":"379.439376ms","remote":"127.0.0.1:40432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1254 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2025-10-02T19:51:02.062841Z","caller":"traceutil/trace.go:172","msg":"trace[297907694] transaction","detail":"{read_only:false; response_revision:1316; number_of_response:1; }","duration":"143.830046ms","start":"2025-10-02T19:51:01.918995Z","end":"2025-10-02T19:51:02.062825Z","steps":["trace[297907694] 'process raft request'  (duration: 143.420415ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:51:26.769973Z","caller":"traceutil/trace.go:172","msg":"trace[120047356] transaction","detail":"{read_only:false; response_revision:1489; number_of_response:1; }","duration":"105.176674ms","start":"2025-10-02T19:51:26.664784Z","end":"2025-10-02T19:51:26.769961Z","steps":["trace[120047356] 'process raft request'  (duration: 105.071332ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:51:28.512406Z","caller":"traceutil/trace.go:172","msg":"trace[224996412] linearizableReadLoop","detail":"{readStateIndex:1547; appliedIndex:1547; }","duration":"179.634712ms","start":"2025-10-02T19:51:28.332752Z","end":"2025-10-02T19:51:28.512387Z","steps":["trace[224996412] 'read index received'  (duration: 179.629304ms)","trace[224996412] 'applied index is now lower than readState.Index'  (duration: 4.751µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-02T19:51:28.512567Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.788154ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-02T19:51:28.512588Z","caller":"traceutil/trace.go:172","msg":"trace[758787958] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1490; }","duration":"179.856997ms","start":"2025-10-02T19:51:28.332725Z","end":"2025-10-02T19:51:28.512582Z","steps":["trace[758787958] 'agreement among raft nodes before linearized reading'  (duration: 179.737912ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:51:28.512898Z","caller":"traceutil/trace.go:172","msg":"trace[1925908342] transaction","detail":"{read_only:false; response_revision:1491; number_of_response:1; }","duration":"219.215611ms","start":"2025-10-02T19:51:28.293673Z","end":"2025-10-02T19:51:28.512889Z","steps":["trace[1925908342] 'process raft request'  (duration: 219.101627ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T19:51:56.585695Z","caller":"traceutil/trace.go:172","msg":"trace[1449575607] linearizableReadLoop","detail":"{readStateIndex:1790; appliedIndex:1790; }","duration":"128.317153ms","start":"2025-10-02T19:51:56.457349Z","end":"2025-10-02T19:51:56.585666Z","steps":["trace[1449575607] 'read index received'  (duration: 128.309214ms)","trace[1449575607] 'applied index is now lower than readState.Index'  (duration: 6.575µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-02T19:51:56.585811Z","caller":"traceutil/trace.go:172","msg":"trace[1688658198] transaction","detail":"{read_only:false; response_revision:1719; number_of_response:1; }","duration":"180.844283ms","start":"2025-10-02T19:51:56.404956Z","end":"2025-10-02T19:51:56.585800Z","steps":["trace[1688658198] 'process raft request'  (duration: 180.746257ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T19:51:56.585938Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.579954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:636"}
	{"level":"info","ts":"2025-10-02T19:51:56.585964Z","caller":"traceutil/trace.go:172","msg":"trace[1280779570] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1719; }","duration":"128.640252ms","start":"2025-10-02T19:51:56.457318Z","end":"2025-10-02T19:51:56.585958Z","steps":["trace[1280779570] 'agreement among raft nodes before linearized reading'  (duration: 128.463101ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:53:47 up 6 min,  0 users,  load average: 1.81, 1.48, 0.77
	Linux addons-355008 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [afc3de843106f2a1d51737d2b5073d398032e3c4ea587ef986a7c0a9ec399354] <==
	E1002 19:49:03.382978       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.152.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.152.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.152.81:443: connect: connection refused" logger="UnhandledError"
	E1002 19:49:03.404411       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.152.81:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.152.81:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.152.81:443: connect: connection refused" logger="UnhandledError"
	I1002 19:49:03.502821       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 19:51:10.389415       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:47676: use of closed network connection
	E1002 19:51:10.582239       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:58216: use of closed network connection
	I1002 19:51:19.308680       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 19:51:19.521526       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.122.109"}
	I1002 19:51:20.096706       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.119.13"}
	I1002 19:52:04.409775       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1002 19:52:06.797398       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1002 19:52:24.916074       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 19:52:54.141341       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 19:52:54.141448       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 19:52:54.173461       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 19:52:54.173859       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 19:52:54.187132       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 19:52:54.187304       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 19:52:54.215871       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 19:52:54.215938       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 19:52:54.248015       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 19:52:54.248039       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 19:52:55.188253       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 19:52:55.248210       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1002 19:52:55.354793       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1002 19:53:45.377591       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.107.205"}
	
	
	==> kube-controller-manager [65dd6bebcaa045921a9fe9ea1b6149ae0a647d6fc77c2f0e7fc6863967d8c7e6] <==
	E1002 19:52:58.880132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:52:59.382166       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:52:59.383425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:02.823871       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:02.825886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:04.186032       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:04.187242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:05.226121       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:05.227119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:11.169148       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:11.170288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:14.627601       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:14.628805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:17.627103       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:17.628202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1002 19:53:18.921938       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 19:53:18.922044       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 19:53:19.025537       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 19:53:19.025731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 19:53:29.085983       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:29.087194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:35.258367       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:35.260358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 19:53:38.482610       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 19:53:38.483542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [3241debf7f0c9cd369032f7093b6b77e68df0ca98cdbc47a49afa98bff55eb40] <==
	I1002 19:48:21.651770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 19:48:21.755107       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 19:48:21.755223       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.211"]
	E1002 19:48:21.755315       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 19:48:22.090901       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 19:48:22.090969       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 19:48:22.090997       1 server_linux.go:132] "Using iptables Proxier"
	I1002 19:48:22.118201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 19:48:22.118582       1 server.go:527] "Version info" version="v1.34.1"
	I1002 19:48:22.118596       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:48:22.126573       1 config.go:200] "Starting service config controller"
	I1002 19:48:22.129314       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 19:48:22.127590       1 config.go:106] "Starting endpoint slice config controller"
	I1002 19:48:22.129370       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 19:48:22.127604       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 19:48:22.129381       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 19:48:22.128315       1 config.go:309] "Starting node config controller"
	I1002 19:48:22.129389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 19:48:22.129393       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 19:48:22.229558       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 19:48:22.229585       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 19:48:22.229608       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5a62f52f74c7a1cd7c2da70815796003fb22d75f8ef96d32651d4c84400d2d60] <==
	E1002 19:48:11.799230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 19:48:11.799340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 19:48:11.799374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 19:48:11.799411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 19:48:11.799599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 19:48:11.799669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 19:48:11.799707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 19:48:11.799733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 19:48:11.799764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 19:48:12.636197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 19:48:12.667458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 19:48:12.765901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 19:48:12.786034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 19:48:12.786668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 19:48:12.802900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 19:48:12.854726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 19:48:12.875884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 19:48:12.900697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 19:48:12.948585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 19:48:13.016932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 19:48:13.030190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 19:48:13.103800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 19:48:13.149228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 19:48:13.188136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1002 19:48:14.788145       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.072381    1500 scope.go:117] "RemoveContainer" containerID="f005999f1bcb4aed2210dc26596e5d45e5c162097e785487bf31a2ca30f462c4"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.073137    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f005999f1bcb4aed2210dc26596e5d45e5c162097e785487bf31a2ca30f462c4"} err="failed to get container status \"f005999f1bcb4aed2210dc26596e5d45e5c162097e785487bf31a2ca30f462c4\": rpc error: code = NotFound desc = could not find container \"f005999f1bcb4aed2210dc26596e5d45e5c162097e785487bf31a2ca30f462c4\": container with ID starting with f005999f1bcb4aed2210dc26596e5d45e5c162097e785487bf31a2ca30f462c4 not found: ID does not exist"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.073155    1500 scope.go:117] "RemoveContainer" containerID="e29cda6badcb61a12625b42b04d0d2c3d62605b1fb1055ad3b9466fb5f09e37c"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.073657    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29cda6badcb61a12625b42b04d0d2c3d62605b1fb1055ad3b9466fb5f09e37c"} err="failed to get container status \"e29cda6badcb61a12625b42b04d0d2c3d62605b1fb1055ad3b9466fb5f09e37c\": rpc error: code = NotFound desc = could not find container \"e29cda6badcb61a12625b42b04d0d2c3d62605b1fb1055ad3b9466fb5f09e37c\": container with ID starting with e29cda6badcb61a12625b42b04d0d2c3d62605b1fb1055ad3b9466fb5f09e37c not found: ID does not exist"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.073694    1500 scope.go:117] "RemoveContainer" containerID="38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.188757    1500 scope.go:117] "RemoveContainer" containerID="38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: E1002 19:52:57.189376    1500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216\": container with ID starting with 38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216 not found: ID does not exist" containerID="38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.189431    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216"} err="failed to get container status \"38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216\": rpc error: code = NotFound desc = could not find container \"38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216\": container with ID starting with 38f096cf90ffb5287cf80aec3402cbc17708cae0b4f015681e5d08c5b060d216 not found: ID does not exist"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.189455    1500 scope.go:117] "RemoveContainer" containerID="3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.309596    1500 scope.go:117] "RemoveContainer" containerID="3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: E1002 19:52:57.311091    1500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1\": container with ID starting with 3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1 not found: ID does not exist" containerID="3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1"
	Oct 02 19:52:57 addons-355008 kubelet[1500]: I1002 19:52:57.311121    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1"} err="failed to get container status \"3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1\": rpc error: code = NotFound desc = could not find container \"3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1\": container with ID starting with 3265269c1e46a5c5567ed4bf65e2cd2ed9b071e2d1166b1344f3db5068e692c1 not found: ID does not exist"
	Oct 02 19:53:00 addons-355008 kubelet[1500]: I1002 19:53:00.781185    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-jmpmw" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 19:53:05 addons-355008 kubelet[1500]: E1002 19:53:05.251402    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759434785250927115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:05 addons-355008 kubelet[1500]: E1002 19:53:05.251429    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759434785250927115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:15 addons-355008 kubelet[1500]: E1002 19:53:15.254639    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759434795254194183  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:15 addons-355008 kubelet[1500]: E1002 19:53:15.254692    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759434795254194183  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:25 addons-355008 kubelet[1500]: E1002 19:53:25.257673    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759434805257200330  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:25 addons-355008 kubelet[1500]: E1002 19:53:25.257770    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759434805257200330  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:35 addons-355008 kubelet[1500]: E1002 19:53:35.262716    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759434815262272677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:35 addons-355008 kubelet[1500]: E1002 19:53:35.262761    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759434815262272677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:37 addons-355008 kubelet[1500]: I1002 19:53:37.780277    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 19:53:45 addons-355008 kubelet[1500]: E1002 19:53:45.266647    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759434825265283059  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:45 addons-355008 kubelet[1500]: E1002 19:53:45.266691    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759434825265283059  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598015}  inodes_used:{value:201}}"
	Oct 02 19:53:45 addons-355008 kubelet[1500]: I1002 19:53:45.352415    1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vxq\" (UniqueName: \"kubernetes.io/projected/19b05e4f-c13e-4bfb-b97c-abcee343ea64-kube-api-access-k4vxq\") pod \"hello-world-app-5d498dc89-wgl6d\" (UID: \"19b05e4f-c13e-4bfb-b97c-abcee343ea64\") " pod="default/hello-world-app-5d498dc89-wgl6d"
	
	
	==> storage-provisioner [dfa9e439dc127ce9b227b3d6a8c5a3ed00fa0d69a9e1ac74fba13939fc9cf1df] <==
	W1002 19:53:21.452593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:23.456397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:23.462150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:25.467019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:25.476590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:27.480021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:27.486138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:29.489784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:29.497105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:31.500656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:31.506271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:33.509160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:33.516445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:35.520855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:35.529095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:37.533950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:37.542540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:39.547000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:39.554662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:41.561693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:41.570165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:43.573426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:43.578624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:45.586583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 19:53:45.592634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-355008 -n addons-355008
helpers_test.go:269: (dbg) Run:  kubectl --context addons-355008 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-wgl6d ingress-nginx-admission-create-kdtl5 ingress-nginx-admission-patch-pkrgf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-355008 describe pod hello-world-app-5d498dc89-wgl6d ingress-nginx-admission-create-kdtl5 ingress-nginx-admission-patch-pkrgf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-355008 describe pod hello-world-app-5d498dc89-wgl6d ingress-nginx-admission-create-kdtl5 ingress-nginx-admission-patch-pkrgf: exit status 1 (73.801305ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-wgl6d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-355008/192.168.39.211
	Start Time:       Thu, 02 Oct 2025 19:53:45 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4vxq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k4vxq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-wgl6d to addons-355008
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kdtl5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pkrgf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-355008 describe pod hello-world-app-5d498dc89-wgl6d ingress-nginx-admission-create-kdtl5 ingress-nginx-admission-patch-pkrgf: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-355008 addons disable ingress-dns --alsologtostderr -v=1: (1.641129165s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-355008 addons disable ingress --alsologtostderr -v=1: (7.791480225s)
--- FAIL: TestAddons/parallel/Ingress (158.47s)
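A minimal triage sketch for this failure, assuming the addons-355008 profile is still running and the kubectl context shown above is available (the pod and profile names come from the post-mortem logs above; the commands themselves are ordinary kubectl/minikube invocations, not part of the test):

	kubectl --context addons-355008 get ingress -A
	kubectl --context addons-355008 get pods -n ingress-nginx
	kubectl --context addons-355008 describe pod hello-world-app-5d498dc89-wgl6d
	out/minikube-linux-amd64 -p addons-355008 logs -n 25

The describe output above shows hello-world-app still pulling docker.io/kicbase/echo-server:1.0 when the post-mortem ran, so re-checking pod status once the image pull completes is a reasonable first step.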

                                                
                                    
x
+
TestPreload (174.36s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-586629 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-586629 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m37.942491991s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-586629 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-586629 image pull gcr.io/k8s-minikube/busybox: (3.42779371s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-586629
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-586629: (7.083264313s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-586629 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:41:00.270048   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-586629 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.950434905s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-586629 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
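The missing-image assertion can be reproduced by hand with the same commands the test drives (a sketch, assuming the out/minikube-linux-amd64 binary and the kvm2/crio configuration used in this job; the trailing grep is added here only for convenience):

	out/minikube-linux-amd64 start -p test-preload-586629 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-586629 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-586629
	out/minikube-linux-amd64 start -p test-preload-586629 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
	out/minikube-linux-amd64 -p test-preload-586629 image list | grep busybox

If the final grep finds nothing, the manually pulled image was not preserved across the stop/start cycle, which is the condition preload_test.go:75 reports above.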
panic.go:636: *** TestPreload FAILED at 2025-10-02 20:41:29.397631185 +0000 UTC m=+3277.675544559
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-586629 -n test-preload-586629
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-586629 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-586629 logs -n 25: (1.107521046s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-273324 ssh -n multinode-273324-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ ssh     │ multinode-273324 ssh -n multinode-273324 sudo cat /home/docker/cp-test_multinode-273324-m03_multinode-273324.txt                                                                    │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ cp      │ multinode-273324 cp multinode-273324-m03:/home/docker/cp-test.txt multinode-273324-m02:/home/docker/cp-test_multinode-273324-m03_multinode-273324-m02.txt                           │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ ssh     │ multinode-273324 ssh -n multinode-273324-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ ssh     │ multinode-273324 ssh -n multinode-273324-m02 sudo cat /home/docker/cp-test_multinode-273324-m03_multinode-273324-m02.txt                                                            │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ node    │ multinode-273324 node stop m03                                                                                                                                                      │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ node    │ multinode-273324 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:27 UTC │
	│ node    │ list -p multinode-273324                                                                                                                                                            │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │                     │
	│ stop    │ -p multinode-273324                                                                                                                                                                 │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:27 UTC │ 02 Oct 25 20:30 UTC │
	│ start   │ -p multinode-273324 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:32 UTC │
	│ node    │ list -p multinode-273324                                                                                                                                                            │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:32 UTC │                     │
	│ node    │ multinode-273324 node delete m03                                                                                                                                                    │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:32 UTC │ 02 Oct 25 20:33 UTC │
	│ stop    │ multinode-273324 stop                                                                                                                                                               │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:33 UTC │ 02 Oct 25 20:35 UTC │
	│ start   │ -p multinode-273324 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:35 UTC │ 02 Oct 25 20:37 UTC │
	│ node    │ list -p multinode-273324                                                                                                                                                            │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:37 UTC │                     │
	│ start   │ -p multinode-273324-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-273324-m02 │ jenkins │ v1.37.0 │ 02 Oct 25 20:37 UTC │                     │
	│ start   │ -p multinode-273324-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-273324-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 20:37 UTC │ 02 Oct 25 20:38 UTC │
	│ node    │ add -p multinode-273324                                                                                                                                                             │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │                     │
	│ delete  │ -p multinode-273324-m03                                                                                                                                                             │ multinode-273324-m03 │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ delete  │ -p multinode-273324                                                                                                                                                                 │ multinode-273324     │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:38 UTC │
	│ start   │ -p test-preload-586629 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-586629  │ jenkins │ v1.37.0 │ 02 Oct 25 20:38 UTC │ 02 Oct 25 20:40 UTC │
	│ image   │ test-preload-586629 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-586629  │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ -p test-preload-586629                                                                                                                                                              │ test-preload-586629  │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start   │ -p test-preload-586629 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-586629  │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:41 UTC │
	│ image   │ test-preload-586629 image list                                                                                                                                                      │ test-preload-586629  │ jenkins │ v1.37.0 │ 02 Oct 25 20:41 UTC │ 02 Oct 25 20:41 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:40:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:40:26.258134   44572 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:40:26.258410   44572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:40:26.258420   44572 out.go:374] Setting ErrFile to fd 2...
	I1002 20:40:26.258424   44572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:40:26.258644   44572 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:40:26.259148   44572 out.go:368] Setting JSON to false
	I1002 20:40:26.260128   44572 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4969,"bootTime":1759432657,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:40:26.260225   44572 start.go:140] virtualization: kvm guest
	I1002 20:40:26.262294   44572 out.go:179] * [test-preload-586629] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:40:26.263435   44572 notify.go:221] Checking for updates...
	I1002 20:40:26.263464   44572 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:40:26.264671   44572 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:40:26.265856   44572 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:40:26.266939   44572 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:40:26.267896   44572 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:40:26.268984   44572 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:40:26.270245   44572 config.go:182] Loaded profile config "test-preload-586629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 20:40:26.270825   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:40:26.270870   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:40:26.289288   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I1002 20:40:26.289776   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:40:26.290271   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:40:26.290320   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:40:26.290758   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:40:26.290938   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:40:26.292700   44572 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 20:40:26.293709   44572 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:40:26.294082   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:40:26.294151   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:40:26.307148   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1002 20:40:26.307565   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:40:26.307994   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:40:26.308016   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:40:26.308372   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:40:26.308540   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:40:26.342869   44572 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 20:40:26.343832   44572 start.go:306] selected driver: kvm2
	I1002 20:40:26.343848   44572 start.go:936] validating driver "kvm2" against &{Name:test-preload-586629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-586629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:40:26.343978   44572 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:40:26.344776   44572 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:40:26.344856   44572 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:40:26.358992   44572 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:40:26.359031   44572 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:40:26.373170   44572 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:40:26.373559   44572 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:40:26.373587   44572 cni.go:84] Creating CNI manager for ""
	I1002 20:40:26.373631   44572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:40:26.373681   44572 start.go:350] cluster config:
	{Name:test-preload-586629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-586629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:40:26.373796   44572 iso.go:125] acquiring lock: {Name:mkabc2fb4ac96edf87725f05149cf44e9a15d593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:40:26.375403   44572 out.go:179] * Starting "test-preload-586629" primary control-plane node in "test-preload-586629" cluster
	I1002 20:40:26.376447   44572 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 20:40:26.486426   44572 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1002 20:40:26.486456   44572 cache.go:59] Caching tarball of preloaded images
	I1002 20:40:26.486628   44572 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 20:40:26.488282   44572 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1002 20:40:26.489241   44572 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 20:40:26.599928   44572 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1002 20:40:26.599975   44572 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1002 20:40:43.558797   44572 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1002 20:40:43.558957   44572 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/config.json ...
	I1002 20:40:43.559219   44572 start.go:361] acquireMachinesLock for test-preload-586629: {Name:mk83006c688982612686a8dbdd0b9c4ecd5d338c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:40:43.559290   44572 start.go:365] duration metric: took 46.356µs to acquireMachinesLock for "test-preload-586629"
	I1002 20:40:43.559308   44572 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:40:43.559320   44572 fix.go:55] fixHost starting: 
	I1002 20:40:43.559593   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:40:43.559630   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:40:43.572945   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43587
	I1002 20:40:43.573383   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:40:43.573792   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:40:43.573817   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:40:43.574181   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:40:43.574367   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:40:43.574527   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetState
	I1002 20:40:43.576274   44572 fix.go:113] recreateIfNeeded on test-preload-586629: state=Stopped err=<nil>
	I1002 20:40:43.576313   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	W1002 20:40:43.576454   44572 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:40:43.578524   44572 out.go:252] * Restarting existing kvm2 VM for "test-preload-586629" ...
	I1002 20:40:43.578550   44572 main.go:141] libmachine: (test-preload-586629) Calling .Start
	I1002 20:40:43.578716   44572 main.go:141] libmachine: (test-preload-586629) starting domain...
	I1002 20:40:43.578754   44572 main.go:141] libmachine: (test-preload-586629) ensuring networks are active...
	I1002 20:40:43.579524   44572 main.go:141] libmachine: (test-preload-586629) Ensuring network default is active
	I1002 20:40:43.579913   44572 main.go:141] libmachine: (test-preload-586629) Ensuring network mk-test-preload-586629 is active
	I1002 20:40:43.580356   44572 main.go:141] libmachine: (test-preload-586629) getting domain XML...
	I1002 20:40:43.581357   44572 main.go:141] libmachine: (test-preload-586629) DBG | starting domain XML:
	I1002 20:40:43.581372   44572 main.go:141] libmachine: (test-preload-586629) DBG | <domain type='kvm'>
	I1002 20:40:43.581403   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <name>test-preload-586629</name>
	I1002 20:40:43.581430   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <uuid>9273ddd9-9a42-40c5-bd07-ebaac7fbad81</uuid>
	I1002 20:40:43.581446   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <memory unit='KiB'>3145728</memory>
	I1002 20:40:43.581456   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1002 20:40:43.581468   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 20:40:43.581479   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <os>
	I1002 20:40:43.581495   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 20:40:43.581508   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <boot dev='cdrom'/>
	I1002 20:40:43.581521   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <boot dev='hd'/>
	I1002 20:40:43.581532   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <bootmenu enable='no'/>
	I1002 20:40:43.581538   44572 main.go:141] libmachine: (test-preload-586629) DBG |   </os>
	I1002 20:40:43.581545   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <features>
	I1002 20:40:43.581554   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <acpi/>
	I1002 20:40:43.581564   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <apic/>
	I1002 20:40:43.581573   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <pae/>
	I1002 20:40:43.581580   44572 main.go:141] libmachine: (test-preload-586629) DBG |   </features>
	I1002 20:40:43.581590   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 20:40:43.581600   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <clock offset='utc'/>
	I1002 20:40:43.581628   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 20:40:43.581657   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <on_reboot>restart</on_reboot>
	I1002 20:40:43.581674   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <on_crash>destroy</on_crash>
	I1002 20:40:43.581684   44572 main.go:141] libmachine: (test-preload-586629) DBG |   <devices>
	I1002 20:40:43.581697   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 20:40:43.581713   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <disk type='file' device='cdrom'>
	I1002 20:40:43.581743   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <driver name='qemu' type='raw'/>
	I1002 20:40:43.581763   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/boot2docker.iso'/>
	I1002 20:40:43.581778   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 20:40:43.581786   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <readonly/>
	I1002 20:40:43.581799   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 20:40:43.581808   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </disk>
	I1002 20:40:43.581817   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <disk type='file' device='disk'>
	I1002 20:40:43.581825   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 20:40:43.581849   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/test-preload-586629.rawdisk'/>
	I1002 20:40:43.581869   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <target dev='hda' bus='virtio'/>
	I1002 20:40:43.581882   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 20:40:43.581892   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </disk>
	I1002 20:40:43.581905   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 20:40:43.581920   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 20:40:43.581932   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </controller>
	I1002 20:40:43.581945   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 20:40:43.581976   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 20:40:43.582003   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 20:40:43.582017   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </controller>
	I1002 20:40:43.582028   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <interface type='network'>
	I1002 20:40:43.582038   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <mac address='52:54:00:50:e8:e9'/>
	I1002 20:40:43.582055   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <source network='mk-test-preload-586629'/>
	I1002 20:40:43.582077   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <model type='virtio'/>
	I1002 20:40:43.582096   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 20:40:43.582107   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </interface>
	I1002 20:40:43.582116   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <interface type='network'>
	I1002 20:40:43.582125   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <mac address='52:54:00:9a:c5:29'/>
	I1002 20:40:43.582136   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <source network='default'/>
	I1002 20:40:43.582148   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <model type='virtio'/>
	I1002 20:40:43.582161   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 20:40:43.582171   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </interface>
	I1002 20:40:43.582181   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <serial type='pty'>
	I1002 20:40:43.582196   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <target type='isa-serial' port='0'>
	I1002 20:40:43.582206   44572 main.go:141] libmachine: (test-preload-586629) DBG |         <model name='isa-serial'/>
	I1002 20:40:43.582217   44572 main.go:141] libmachine: (test-preload-586629) DBG |       </target>
	I1002 20:40:43.582230   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </serial>
	I1002 20:40:43.582238   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <console type='pty'>
	I1002 20:40:43.582251   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <target type='serial' port='0'/>
	I1002 20:40:43.582261   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </console>
	I1002 20:40:43.582272   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <input type='mouse' bus='ps2'/>
	I1002 20:40:43.582283   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 20:40:43.582292   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <audio id='1' type='none'/>
	I1002 20:40:43.582305   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <memballoon model='virtio'>
	I1002 20:40:43.582322   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 20:40:43.582339   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </memballoon>
	I1002 20:40:43.582349   44572 main.go:141] libmachine: (test-preload-586629) DBG |     <rng model='virtio'>
	I1002 20:40:43.582355   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <backend model='random'>/dev/random</backend>
	I1002 20:40:43.582370   44572 main.go:141] libmachine: (test-preload-586629) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 20:40:43.582380   44572 main.go:141] libmachine: (test-preload-586629) DBG |     </rng>
	I1002 20:40:43.582389   44572 main.go:141] libmachine: (test-preload-586629) DBG |   </devices>
	I1002 20:40:43.582399   44572 main.go:141] libmachine: (test-preload-586629) DBG | </domain>
	I1002 20:40:43.582414   44572 main.go:141] libmachine: (test-preload-586629) DBG | 
	I1002 20:40:44.841768   44572 main.go:141] libmachine: (test-preload-586629) waiting for domain to start...
	I1002 20:40:44.843164   44572 main.go:141] libmachine: (test-preload-586629) domain is now running
	I1002 20:40:44.843189   44572 main.go:141] libmachine: (test-preload-586629) waiting for IP...
	I1002 20:40:44.843953   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:44.844573   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has current primary IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:44.844592   44572 main.go:141] libmachine: (test-preload-586629) found domain IP: 192.168.39.49
	I1002 20:40:44.844605   44572 main.go:141] libmachine: (test-preload-586629) reserving static IP address...
	I1002 20:40:44.845019   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "test-preload-586629", mac: "52:54:00:50:e8:e9", ip: "192.168.39.49"} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:38:53 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:44.845058   44572 main.go:141] libmachine: (test-preload-586629) DBG | skip adding static IP to network mk-test-preload-586629 - found existing host DHCP lease matching {name: "test-preload-586629", mac: "52:54:00:50:e8:e9", ip: "192.168.39.49"}
	I1002 20:40:44.845081   44572 main.go:141] libmachine: (test-preload-586629) reserved static IP address 192.168.39.49 for domain test-preload-586629
	I1002 20:40:44.845102   44572 main.go:141] libmachine: (test-preload-586629) waiting for SSH...
	I1002 20:40:44.845119   44572 main.go:141] libmachine: (test-preload-586629) DBG | Getting to WaitForSSH function...
	I1002 20:40:44.847476   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:44.847840   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:38:53 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:44.847871   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:44.848031   44572 main.go:141] libmachine: (test-preload-586629) DBG | Using SSH client type: external
	I1002 20:40:44.848059   44572 main.go:141] libmachine: (test-preload-586629) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa (-rw-------)
	I1002 20:40:44.848092   44572 main.go:141] libmachine: (test-preload-586629) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 20:40:44.848109   44572 main.go:141] libmachine: (test-preload-586629) DBG | About to run SSH command:
	I1002 20:40:44.848120   44572 main.go:141] libmachine: (test-preload-586629) DBG | exit 0
	I1002 20:40:56.100186   44572 main.go:141] libmachine: (test-preload-586629) DBG | SSH cmd err, output: exit status 255: 
	I1002 20:40:56.100226   44572 main.go:141] libmachine: (test-preload-586629) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1002 20:40:56.100234   44572 main.go:141] libmachine: (test-preload-586629) DBG | command : exit 0
	I1002 20:40:56.100239   44572 main.go:141] libmachine: (test-preload-586629) DBG | err     : exit status 255
	I1002 20:40:56.100247   44572 main.go:141] libmachine: (test-preload-586629) DBG | output  : 
	I1002 20:40:59.100912   44572 main.go:141] libmachine: (test-preload-586629) DBG | Getting to WaitForSSH function...
	I1002 20:40:59.103845   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.104312   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.104365   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.104525   44572 main.go:141] libmachine: (test-preload-586629) DBG | Using SSH client type: external
	I1002 20:40:59.104566   44572 main.go:141] libmachine: (test-preload-586629) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa (-rw-------)
	I1002 20:40:59.104598   44572 main.go:141] libmachine: (test-preload-586629) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 20:40:59.104615   44572 main.go:141] libmachine: (test-preload-586629) DBG | About to run SSH command:
	I1002 20:40:59.104632   44572 main.go:141] libmachine: (test-preload-586629) DBG | exit 0
	I1002 20:40:59.238535   44572 main.go:141] libmachine: (test-preload-586629) DBG | SSH cmd err, output: <nil>: 
	I1002 20:40:59.238925   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetConfigRaw
	I1002 20:40:59.239567   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetIP
	I1002 20:40:59.242338   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.242698   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.242740   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.243003   44572 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/config.json ...
	I1002 20:40:59.243236   44572 machine.go:93] provisionDockerMachine start ...
	I1002 20:40:59.243255   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:40:59.243464   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:40:59.245981   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.246347   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.246376   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.246543   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:40:59.246710   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:40:59.246850   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:40:59.247010   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:40:59.247143   44572 main.go:141] libmachine: Using SSH client type: native
	I1002 20:40:59.247411   44572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I1002 20:40:59.247424   44572 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:40:59.360492   44572 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1002 20:40:59.360519   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetMachineName
	I1002 20:40:59.360806   44572 buildroot.go:166] provisioning hostname "test-preload-586629"
	I1002 20:40:59.360834   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetMachineName
	I1002 20:40:59.361008   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:40:59.363695   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.364062   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.364088   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.364248   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:40:59.364425   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:40:59.364580   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:40:59.364738   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:40:59.364881   44572 main.go:141] libmachine: Using SSH client type: native
	I1002 20:40:59.365101   44572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I1002 20:40:59.365117   44572 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-586629 && echo "test-preload-586629" | sudo tee /etc/hostname
	I1002 20:40:59.500332   44572 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-586629
	
	I1002 20:40:59.500356   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:40:59.503341   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.503719   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.503766   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.503936   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:40:59.504155   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:40:59.504311   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:40:59.504446   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:40:59.504612   44572 main.go:141] libmachine: Using SSH client type: native
	I1002 20:40:59.504850   44572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I1002 20:40:59.504868   44572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-586629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-586629/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-586629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:40:59.632136   44572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:40:59.632173   44572 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9524/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9524/.minikube}
	I1002 20:40:59.632212   44572 buildroot.go:174] setting up certificates
	I1002 20:40:59.632221   44572 provision.go:84] configureAuth start
	I1002 20:40:59.632231   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetMachineName
	I1002 20:40:59.632527   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetIP
	I1002 20:40:59.635560   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.636014   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.636072   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.636235   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:40:59.638501   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.638871   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.638891   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.639120   44572 provision.go:143] copyHostCerts
	I1002 20:40:59.639177   44572 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem, removing ...
	I1002 20:40:59.639189   44572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem
	I1002 20:40:59.639252   44572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem (1082 bytes)
	I1002 20:40:59.639369   44572 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem, removing ...
	I1002 20:40:59.639380   44572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem
	I1002 20:40:59.639407   44572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem (1123 bytes)
	I1002 20:40:59.639461   44572 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem, removing ...
	I1002 20:40:59.639468   44572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem
	I1002 20:40:59.639490   44572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem (1679 bytes)
	I1002 20:40:59.639539   44572 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem org=jenkins.test-preload-586629 san=[127.0.0.1 192.168.39.49 localhost minikube test-preload-586629]
	I1002 20:40:59.901790   44572 provision.go:177] copyRemoteCerts
	I1002 20:40:59.901861   44572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:40:59.901886   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:40:59.904705   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.905062   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:40:59.905094   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:40:59.905280   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:40:59.905461   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:40:59.905604   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:40:59.905765   44572 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa Username:docker}
	I1002 20:40:59.994341   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 20:41:00.027746   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:41:00.061098   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:41:00.092826   44572 provision.go:87] duration metric: took 460.588892ms to configureAuth
	I1002 20:41:00.092861   44572 buildroot.go:189] setting minikube options for container-runtime
	I1002 20:41:00.093062   44572 config.go:182] Loaded profile config "test-preload-586629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 20:41:00.093132   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:41:00.096043   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.096378   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:00.096419   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.096608   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:41:00.096821   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:00.096992   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:00.097144   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:41:00.097313   44572 main.go:141] libmachine: Using SSH client type: native
	I1002 20:41:00.097508   44572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I1002 20:41:00.097525   44572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:41:00.372954   44572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:41:00.372982   44572 machine.go:96] duration metric: took 1.129732723s to provisionDockerMachine
	I1002 20:41:00.372994   44572 start.go:294] postStartSetup for "test-preload-586629" (driver="kvm2")
	I1002 20:41:00.373008   44572 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:41:00.373024   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:41:00.373326   44572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:41:00.373348   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:41:00.376474   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.376857   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:00.376886   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.377105   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:41:00.377305   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:00.377459   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:41:00.377603   44572 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa Username:docker}
	I1002 20:41:00.465334   44572 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:41:00.470838   44572 info.go:137] Remote host: Buildroot 2025.02
	I1002 20:41:00.470864   44572 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/addons for local assets ...
	I1002 20:41:00.470935   44572 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/files for local assets ...
	I1002 20:41:00.471005   44572 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem -> 134492.pem in /etc/ssl/certs
	I1002 20:41:00.471094   44572 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:41:00.483714   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem --> /etc/ssl/certs/134492.pem (1708 bytes)
	I1002 20:41:00.515915   44572 start.go:297] duration metric: took 142.904678ms for postStartSetup
	I1002 20:41:00.515957   44572 fix.go:57] duration metric: took 16.956638105s for fixHost
	I1002 20:41:00.515978   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:41:00.518693   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.519070   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:00.519101   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.519296   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:41:00.519521   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:00.519713   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:00.519865   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:41:00.520040   44572 main.go:141] libmachine: Using SSH client type: native
	I1002 20:41:00.520245   44572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I1002 20:41:00.520254   44572 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 20:41:00.632884   44572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759437660.592894821
	
	I1002 20:41:00.632908   44572 fix.go:217] guest clock: 1759437660.592894821
	I1002 20:41:00.632915   44572 fix.go:230] Guest: 2025-10-02 20:41:00.592894821 +0000 UTC Remote: 2025-10-02 20:41:00.515960629 +0000 UTC m=+34.294958334 (delta=76.934192ms)
	I1002 20:41:00.632954   44572 fix.go:201] guest clock delta is within tolerance: 76.934192ms
	I1002 20:41:00.632961   44572 start.go:84] releasing machines lock for "test-preload-586629", held for 17.073661795s
	I1002 20:41:00.632982   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:41:00.633242   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetIP
	I1002 20:41:00.636168   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.636501   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:00.636532   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.636648   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:41:00.637161   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:41:00.637325   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:41:00.637431   44572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:41:00.637480   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:41:00.637536   44572 ssh_runner.go:195] Run: cat /version.json
	I1002 20:41:00.637560   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:41:00.640660   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.640707   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.641086   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:00.641118   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.641149   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:00.641167   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:00.641318   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:41:00.641423   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:41:00.641519   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:00.641595   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:00.641675   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:41:00.641755   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:41:00.641824   44572 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa Username:docker}
	I1002 20:41:00.641879   44572 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa Username:docker}
	I1002 20:41:00.724137   44572 ssh_runner.go:195] Run: systemctl --version
	I1002 20:41:00.762292   44572 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:41:00.911514   44572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:41:00.919886   44572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:41:00.919971   44572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:41:00.942580   44572 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:41:00.942609   44572 start.go:496] detecting cgroup driver to use...
	I1002 20:41:00.942681   44572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:41:00.968952   44572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:41:00.992849   44572 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:41:00.992927   44572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:41:01.019880   44572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:41:01.040551   44572 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:41:01.197013   44572 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:41:01.416883   44572 docker.go:234] disabling docker service ...
	I1002 20:41:01.416959   44572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:41:01.435306   44572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:41:01.453021   44572 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:41:01.623540   44572 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:41:01.777048   44572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:41:01.794998   44572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:41:01.820820   44572 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1002 20:41:01.820883   44572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:41:01.835556   44572 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:41:01.835623   44572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:41:01.850760   44572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:41:01.865867   44572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:41:01.880864   44572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:41:01.895948   44572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:41:01.910271   44572 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:41:01.934275   44572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:41:01.948480   44572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:41:01.961676   44572 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 20:41:01.961752   44572 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 20:41:01.988839   44572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:41:02.004058   44572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:41:02.165317   44572 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:41:02.295748   44572 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:41:02.295814   44572 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:41:02.302291   44572 start.go:564] Will wait 60s for crictl version
	I1002 20:41:02.302362   44572 ssh_runner.go:195] Run: which crictl
	I1002 20:41:02.307616   44572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 20:41:02.358149   44572 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 20:41:02.358267   44572 ssh_runner.go:195] Run: crio --version
	I1002 20:41:02.391494   44572 ssh_runner.go:195] Run: crio --version
	I1002 20:41:02.425250   44572 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1002 20:41:02.426845   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetIP
	I1002 20:41:02.430019   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:02.430682   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:02.430713   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:02.430990   44572 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 20:41:02.436546   44572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:41:02.453809   44572 kubeadm.go:883] updating cluster {Name:test-preload-586629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-586629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:41:02.453920   44572 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1002 20:41:02.453977   44572 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:41:02.498644   44572 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1002 20:41:02.498717   44572 ssh_runner.go:195] Run: which lz4
	I1002 20:41:02.503711   44572 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 20:41:02.509953   44572 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 20:41:02.510010   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1002 20:41:04.251347   44572 crio.go:462] duration metric: took 1.747679242s to copy over tarball
	I1002 20:41:04.251421   44572 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 20:41:06.017136   44572 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.765680287s)
	I1002 20:41:06.017172   44572 crio.go:469] duration metric: took 1.765793807s to extract the tarball
	I1002 20:41:06.017183   44572 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 20:41:06.069104   44572 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:41:06.114675   44572 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:41:06.114698   44572 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:41:06.114705   44572 kubeadm.go:934] updating node { 192.168.39.49 8443 v1.32.0 crio true true} ...
	I1002 20:41:06.114806   44572 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-586629 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-586629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:41:06.114889   44572 ssh_runner.go:195] Run: crio config
	I1002 20:41:06.167287   44572 cni.go:84] Creating CNI manager for ""
	I1002 20:41:06.167310   44572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:41:06.167324   44572 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:41:06.167342   44572 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-586629 NodeName:test-preload-586629 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:41:06.167448   44572 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-586629"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.49"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.49"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:41:06.167511   44572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1002 20:41:06.180588   44572 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:41:06.180673   44572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:41:06.193233   44572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1002 20:41:06.215458   44572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:41:06.242561   44572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1002 20:41:06.269690   44572 ssh_runner.go:195] Run: grep 192.168.39.49	control-plane.minikube.internal$ /etc/hosts
	I1002 20:41:06.274698   44572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:41:06.294285   44572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:41:06.449009   44572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:41:06.470780   44572 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629 for IP: 192.168.39.49
	I1002 20:41:06.470810   44572 certs.go:195] generating shared ca certs ...
	I1002 20:41:06.470831   44572 certs.go:227] acquiring lock for ca certs: {Name:mk36b72fb138c08da6f63c209f5b6ddd4af4f5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:41:06.471030   44572 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key
	I1002 20:41:06.471096   44572 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key
	I1002 20:41:06.471112   44572 certs.go:257] generating profile certs ...
	I1002 20:41:06.471223   44572 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/client.key
	I1002 20:41:06.471319   44572 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/apiserver.key.5e28599e
	I1002 20:41:06.471380   44572 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/proxy-client.key
	I1002 20:41:06.471547   44572 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449.pem (1338 bytes)
	W1002 20:41:06.471597   44572 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449_empty.pem, impossibly tiny 0 bytes
	I1002 20:41:06.471612   44572 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:41:06.471649   44572 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:41:06.471687   44572 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:41:06.471745   44572 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem (1679 bytes)
	I1002 20:41:06.471808   44572 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem (1708 bytes)
	I1002 20:41:06.472561   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:41:06.511091   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:41:06.545041   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:41:06.579436   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:41:06.612518   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 20:41:06.646013   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:41:06.679759   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:41:06.713624   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:41:06.746598   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:41:06.779291   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449.pem --> /usr/share/ca-certificates/13449.pem (1338 bytes)
	I1002 20:41:06.810818   44572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem --> /usr/share/ca-certificates/134492.pem (1708 bytes)
	I1002 20:41:06.842313   44572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:41:06.864822   44572 ssh_runner.go:195] Run: openssl version
	I1002 20:41:06.871732   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134492.pem && ln -fs /usr/share/ca-certificates/134492.pem /etc/ssl/certs/134492.pem"
	I1002 20:41:06.886604   44572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134492.pem
	I1002 20:41:06.892374   44572 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 19:56 /usr/share/ca-certificates/134492.pem
	I1002 20:41:06.892443   44572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134492.pem
	I1002 20:41:06.900292   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134492.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:41:06.914961   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:41:06.928611   44572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:41:06.934411   44572 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:48 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:41:06.934461   44572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:41:06.942196   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:41:06.955997   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13449.pem && ln -fs /usr/share/ca-certificates/13449.pem /etc/ssl/certs/13449.pem"
	I1002 20:41:06.969946   44572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13449.pem
	I1002 20:41:06.975887   44572 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 19:56 /usr/share/ca-certificates/13449.pem
	I1002 20:41:06.975948   44572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13449.pem
	I1002 20:41:06.984471   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13449.pem /etc/ssl/certs/51391683.0"
	I1002 20:41:06.999178   44572 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:41:07.005348   44572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:41:07.013646   44572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:41:07.022377   44572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:41:07.031062   44572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:41:07.039505   44572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:41:07.047993   44572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
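
[Annotation, not part of the captured log] The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours before the cluster is restarted. A roughly equivalent check can be written natively with crypto/x509; this is an illustrative sketch under that assumption, not the code minikube itself runs.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given duration (the native equivalent of `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Paths mirror the ones checked in the log; adjust for your environment.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, "err:", err)
	}
}
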
	I1002 20:41:07.056567   44572 kubeadm.go:400] StartCluster: {Name:test-preload-586629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-586629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:41:07.056658   44572 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:41:07.056716   44572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:41:07.104899   44572 cri.go:89] found id: ""
	I1002 20:41:07.104981   44572 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:41:07.119194   44572 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:41:07.119213   44572 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:41:07.119256   44572 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:41:07.133340   44572 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:41:07.133739   44572 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-586629" does not appear in /home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:41:07.133832   44572 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9524/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-586629" cluster setting kubeconfig missing "test-preload-586629" context setting]
	I1002 20:41:07.134083   44572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/kubeconfig: {Name:mk0c75eb22a83f2f7ea4f564360059d4e6d21b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:41:07.134571   44572 kapi.go:59] client config for test-preload-586629: &rest.Config{Host:"https://192.168.39.49:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:41:07.134950   44572 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:41:07.134971   44572 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:41:07.134978   44572 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:41:07.134984   44572 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:41:07.134989   44572 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:41:07.135307   44572 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:41:07.148852   44572 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.49
	I1002 20:41:07.148886   44572 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:41:07.148902   44572 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:41:07.148949   44572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:41:07.192546   44572 cri.go:89] found id: ""
	I1002 20:41:07.192625   44572 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:41:07.215949   44572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:41:07.230548   44572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:41:07.230576   44572 kubeadm.go:157] found existing configuration files:
	
	I1002 20:41:07.230638   44572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:41:07.242934   44572 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:41:07.243031   44572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:41:07.256043   44572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:41:07.268493   44572 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:41:07.268566   44572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:41:07.282147   44572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:41:07.293954   44572 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:41:07.294010   44572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:41:07.307340   44572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:41:07.319885   44572 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:41:07.319976   44572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
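
[Annotation, not part of the captured log] Each of the four kubeconfig files above is checked with grep for the expected control-plane endpoint and removed when the check fails, so the subsequent `kubeadm init phase kubeconfig` regenerates them. A compact sketch of that grep-then-remove pattern follows; the helper name and the sudo invocations are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConf removes conf files that do not reference the expected
// control-plane endpoint, so kubeadm recreates them in the kubeconfig phase.
func cleanStaleConf(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s: stale or missing (%v), removing\n", f, err)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConf("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
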
	I1002 20:41:07.333312   44572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:41:07.347743   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:41:07.409995   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:41:08.603497   44572 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.193459583s)
	I1002 20:41:08.603560   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:41:08.861053   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:41:08.936609   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:41:09.038652   44572 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:41:09.038739   44572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:41:09.538929   44572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:41:10.038847   44572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:41:10.538907   44572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:41:10.568565   44572 api_server.go:72] duration metric: took 1.529925635s to wait for apiserver process to appear ...
	I1002 20:41:10.568601   44572 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:41:10.568625   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:10.569217   44572 api_server.go:269] stopped: https://192.168.39.49:8443/healthz: Get "https://192.168.39.49:8443/healthz": dial tcp 192.168.39.49:8443: connect: connection refused
	I1002 20:41:11.068893   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:13.346708   44572 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:41:13.346753   44572 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:41:13.346772   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:13.414028   44572 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:41:13.414054   44572 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:41:13.569424   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:13.574426   44572 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:41:13.574451   44572 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:41:14.069226   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:14.076356   44572 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:41:14.076391   44572 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:41:14.568817   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:14.582923   44572 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:41:14.582952   44572 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:41:15.069777   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:15.076451   44572 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I1002 20:41:15.084473   44572 api_server.go:141] control plane version: v1.32.0
	I1002 20:41:15.084501   44572 api_server.go:131] duration metric: took 4.515893325s to wait for apiserver health ...
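
[Annotation, not part of the captured log] The readiness wait above tolerates 403 responses (anonymous /healthz access is forbidden while RBAC bootstrap roles are still being created) and 500 responses (post-start hooks not yet finished), and only stops once /healthz returns 200. A minimal polling sketch is shown below; it skips TLS verification purely for brevity, whereas a real client would trust the cluster's ca.crt instead.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses; 403/500 responses are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz not ready, status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.49:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
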
	I1002 20:41:15.084510   44572 cni.go:84] Creating CNI manager for ""
	I1002 20:41:15.084516   44572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:41:15.086061   44572 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:41:15.087176   44572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:41:15.106867   44572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 20:41:15.144388   44572 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:41:15.148827   44572 system_pods.go:59] 7 kube-system pods found
	I1002 20:41:15.148865   44572 system_pods.go:61] "coredns-668d6bf9bc-zjgzx" [c4a593fb-cecb-4fa2-80c6-cd32fd451c95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 20:41:15.148876   44572 system_pods.go:61] "etcd-test-preload-586629" [955daccb-223f-4c22-b16d-7115cf92208b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:41:15.148886   44572 system_pods.go:61] "kube-apiserver-test-preload-586629" [2d063867-1a6b-4ae1-92e3-090aef17e92c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:41:15.148897   44572 system_pods.go:61] "kube-controller-manager-test-preload-586629" [fac698e8-1fc4-438e-92e6-c8ac223be468] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:41:15.148904   44572 system_pods.go:61] "kube-proxy-pf6nq" [dd480651-06ec-4c01-8dd8-7ee5c2f56a48] Running
	I1002 20:41:15.148917   44572 system_pods.go:61] "kube-scheduler-test-preload-586629" [0b9425d8-b90d-4ec5-931c-bf89c6813c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:41:15.148931   44572 system_pods.go:61] "storage-provisioner" [3775d2ea-6616-40c6-873d-9a459a4d74bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 20:41:15.148947   44572 system_pods.go:74] duration metric: took 4.530672ms to wait for pod list to return data ...
	I1002 20:41:15.148959   44572 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:41:15.155868   44572 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 20:41:15.155899   44572 node_conditions.go:123] node cpu capacity is 2
	I1002 20:41:15.155913   44572 node_conditions.go:105] duration metric: took 6.948334ms to run NodePressure ...
	I1002 20:41:15.155984   44572 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:41:15.430495   44572 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 20:41:15.438679   44572 kubeadm.go:743] kubelet initialised
	I1002 20:41:15.438703   44572 kubeadm.go:744] duration metric: took 8.178018ms waiting for restarted kubelet to initialise ...
	I1002 20:41:15.438734   44572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:41:15.461021   44572 ops.go:34] apiserver oom_adj: -16
	I1002 20:41:15.461054   44572 kubeadm.go:601] duration metric: took 8.341832749s to restartPrimaryControlPlane
	I1002 20:41:15.461067   44572 kubeadm.go:402] duration metric: took 8.404505838s to StartCluster
	I1002 20:41:15.461089   44572 settings.go:142] acquiring lock: {Name:mk6a3acbc81c910cfbdc018b811be13c1e438c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:41:15.461189   44572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:41:15.462185   44572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/kubeconfig: {Name:mk0c75eb22a83f2f7ea4f564360059d4e6d21b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:41:15.462501   44572 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:41:15.462642   44572 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:41:15.462748   44572 config.go:182] Loaded profile config "test-preload-586629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1002 20:41:15.462769   44572 addons.go:69] Setting storage-provisioner=true in profile "test-preload-586629"
	I1002 20:41:15.462795   44572 addons.go:69] Setting default-storageclass=true in profile "test-preload-586629"
	I1002 20:41:15.462824   44572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-586629"
	I1002 20:41:15.462799   44572 addons.go:238] Setting addon storage-provisioner=true in "test-preload-586629"
	W1002 20:41:15.462929   44572 addons.go:247] addon storage-provisioner should already be in state true
	I1002 20:41:15.462962   44572 host.go:66] Checking if "test-preload-586629" exists ...
	I1002 20:41:15.463278   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:41:15.463331   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:41:15.463390   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:41:15.463439   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:41:15.464040   44572 out.go:179] * Verifying Kubernetes components...
	I1002 20:41:15.465319   44572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:41:15.477415   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I1002 20:41:15.477444   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I1002 20:41:15.478034   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:41:15.478041   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:41:15.478528   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:41:15.478541   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:41:15.478697   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:41:15.478738   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:41:15.478948   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:41:15.479096   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:41:15.479139   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetState
	I1002 20:41:15.479691   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:41:15.479763   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:41:15.481837   44572 kapi.go:59] client config for test-preload-586629: &rest.Config{Host:"https://192.168.39.49:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:41:15.482099   44572 addons.go:238] Setting addon default-storageclass=true in "test-preload-586629"
	W1002 20:41:15.482113   44572 addons.go:247] addon default-storageclass should already be in state true
	I1002 20:41:15.482135   44572 host.go:66] Checking if "test-preload-586629" exists ...
	I1002 20:41:15.482412   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:41:15.482453   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:41:15.494685   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I1002 20:41:15.495223   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I1002 20:41:15.495290   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:41:15.495665   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:41:15.495752   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:41:15.495778   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:41:15.496149   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:41:15.496165   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:41:15.496181   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:41:15.496352   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetState
	I1002 20:41:15.496510   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:41:15.497047   44572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:41:15.497090   44572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:41:15.498525   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:41:15.500439   44572 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:41:15.502955   44572 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:41:15.502970   44572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:41:15.502986   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:41:15.506795   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:15.507312   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:15.507339   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:15.507549   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:41:15.507705   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:15.507851   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:41:15.507966   44572 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa Username:docker}
	I1002 20:41:15.512086   44572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I1002 20:41:15.512623   44572 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:41:15.513113   44572 main.go:141] libmachine: Using API Version  1
	I1002 20:41:15.513134   44572 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:41:15.513543   44572 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:41:15.513713   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetState
	I1002 20:41:15.515540   44572 main.go:141] libmachine: (test-preload-586629) Calling .DriverName
	I1002 20:41:15.515859   44572 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:41:15.515872   44572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:41:15.515886   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHHostname
	I1002 20:41:15.519136   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:15.519601   44572 main.go:141] libmachine: (test-preload-586629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:e8:e9", ip: ""} in network mk-test-preload-586629: {Iface:virbr1 ExpiryTime:2025-10-02 21:40:55 +0000 UTC Type:0 Mac:52:54:00:50:e8:e9 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:test-preload-586629 Clientid:01:52:54:00:50:e8:e9}
	I1002 20:41:15.519647   44572 main.go:141] libmachine: (test-preload-586629) DBG | domain test-preload-586629 has defined IP address 192.168.39.49 and MAC address 52:54:00:50:e8:e9 in network mk-test-preload-586629
	I1002 20:41:15.519758   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHPort
	I1002 20:41:15.519951   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHKeyPath
	I1002 20:41:15.520122   44572 main.go:141] libmachine: (test-preload-586629) Calling .GetSSHUsername
	I1002 20:41:15.520278   44572 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/test-preload-586629/id_rsa Username:docker}
	I1002 20:41:15.715398   44572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:41:15.736492   44572 node_ready.go:35] waiting up to 6m0s for node "test-preload-586629" to be "Ready" ...
	I1002 20:41:15.900216   44572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:41:15.911256   44572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:41:16.628062   44572 main.go:141] libmachine: Making call to close driver server
	I1002 20:41:16.628088   44572 main.go:141] libmachine: (test-preload-586629) Calling .Close
	I1002 20:41:16.628137   44572 main.go:141] libmachine: Making call to close driver server
	I1002 20:41:16.628165   44572 main.go:141] libmachine: (test-preload-586629) Calling .Close
	I1002 20:41:16.628394   44572 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:41:16.628413   44572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:41:16.628440   44572 main.go:141] libmachine: Making call to close driver server
	I1002 20:41:16.628456   44572 main.go:141] libmachine: (test-preload-586629) Calling .Close
	I1002 20:41:16.628467   44572 main.go:141] libmachine: (test-preload-586629) DBG | Closing plugin on server side
	I1002 20:41:16.628491   44572 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:41:16.628507   44572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:41:16.628515   44572 main.go:141] libmachine: Making call to close driver server
	I1002 20:41:16.628522   44572 main.go:141] libmachine: (test-preload-586629) Calling .Close
	I1002 20:41:16.628664   44572 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:41:16.628679   44572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:41:16.628706   44572 main.go:141] libmachine: (test-preload-586629) DBG | Closing plugin on server side
	I1002 20:41:16.628735   44572 main.go:141] libmachine: (test-preload-586629) DBG | Closing plugin on server side
	I1002 20:41:16.628757   44572 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:41:16.628769   44572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:41:16.639611   44572 main.go:141] libmachine: Making call to close driver server
	I1002 20:41:16.639654   44572 main.go:141] libmachine: (test-preload-586629) Calling .Close
	I1002 20:41:16.639945   44572 main.go:141] libmachine: Successfully made call to close driver server
	I1002 20:41:16.639965   44572 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 20:41:16.639978   44572 main.go:141] libmachine: (test-preload-586629) DBG | Closing plugin on server side
	I1002 20:41:16.641684   44572 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 20:41:16.642828   44572 addons.go:514] duration metric: took 1.180197016s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1002 20:41:17.740139   44572 node_ready.go:57] node "test-preload-586629" has "Ready":"False" status (will retry)
	W1002 20:41:20.239936   44572 node_ready.go:57] node "test-preload-586629" has "Ready":"False" status (will retry)
	W1002 20:41:22.240032   44572 node_ready.go:57] node "test-preload-586629" has "Ready":"False" status (will retry)
	I1002 20:41:24.239604   44572 node_ready.go:49] node "test-preload-586629" is "Ready"
	I1002 20:41:24.239647   44572 node_ready.go:38] duration metric: took 8.503084538s for node "test-preload-586629" to be "Ready" ...
	I1002 20:41:24.239664   44572 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:41:24.239718   44572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:41:24.260353   44572 api_server.go:72] duration metric: took 8.797807977s to wait for apiserver process to appear ...
	I1002 20:41:24.260374   44572 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:41:24.260397   44572 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1002 20:41:24.265795   44572 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I1002 20:41:24.267124   44572 api_server.go:141] control plane version: v1.32.0
	I1002 20:41:24.267170   44572 api_server.go:131] duration metric: took 6.790027ms to wait for apiserver health ...
	I1002 20:41:24.267179   44572 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:41:24.271372   44572 system_pods.go:59] 7 kube-system pods found
	I1002 20:41:24.271400   44572 system_pods.go:61] "coredns-668d6bf9bc-zjgzx" [c4a593fb-cecb-4fa2-80c6-cd32fd451c95] Running
	I1002 20:41:24.271408   44572 system_pods.go:61] "etcd-test-preload-586629" [955daccb-223f-4c22-b16d-7115cf92208b] Running
	I1002 20:41:24.271420   44572 system_pods.go:61] "kube-apiserver-test-preload-586629" [2d063867-1a6b-4ae1-92e3-090aef17e92c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:41:24.271426   44572 system_pods.go:61] "kube-controller-manager-test-preload-586629" [fac698e8-1fc4-438e-92e6-c8ac223be468] Running
	I1002 20:41:24.271437   44572 system_pods.go:61] "kube-proxy-pf6nq" [dd480651-06ec-4c01-8dd8-7ee5c2f56a48] Running
	I1002 20:41:24.271443   44572 system_pods.go:61] "kube-scheduler-test-preload-586629" [0b9425d8-b90d-4ec5-931c-bf89c6813c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:41:24.271450   44572 system_pods.go:61] "storage-provisioner" [3775d2ea-6616-40c6-873d-9a459a4d74bb] Running
	I1002 20:41:24.271457   44572 system_pods.go:74] duration metric: took 4.272191ms to wait for pod list to return data ...
	I1002 20:41:24.271465   44572 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:41:24.274573   44572 default_sa.go:45] found service account: "default"
	I1002 20:41:24.274594   44572 default_sa.go:55] duration metric: took 3.121894ms for default service account to be created ...
	I1002 20:41:24.274601   44572 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:41:24.277202   44572 system_pods.go:86] 7 kube-system pods found
	I1002 20:41:24.277221   44572 system_pods.go:89] "coredns-668d6bf9bc-zjgzx" [c4a593fb-cecb-4fa2-80c6-cd32fd451c95] Running
	I1002 20:41:24.277226   44572 system_pods.go:89] "etcd-test-preload-586629" [955daccb-223f-4c22-b16d-7115cf92208b] Running
	I1002 20:41:24.277240   44572 system_pods.go:89] "kube-apiserver-test-preload-586629" [2d063867-1a6b-4ae1-92e3-090aef17e92c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:41:24.277248   44572 system_pods.go:89] "kube-controller-manager-test-preload-586629" [fac698e8-1fc4-438e-92e6-c8ac223be468] Running
	I1002 20:41:24.277253   44572 system_pods.go:89] "kube-proxy-pf6nq" [dd480651-06ec-4c01-8dd8-7ee5c2f56a48] Running
	I1002 20:41:24.277259   44572 system_pods.go:89] "kube-scheduler-test-preload-586629" [0b9425d8-b90d-4ec5-931c-bf89c6813c8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:41:24.277264   44572 system_pods.go:89] "storage-provisioner" [3775d2ea-6616-40c6-873d-9a459a4d74bb] Running
	I1002 20:41:24.277271   44572 system_pods.go:126] duration metric: took 2.665182ms to wait for k8s-apps to be running ...
	I1002 20:41:24.277276   44572 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:41:24.277316   44572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:41:24.294712   44572 system_svc.go:56] duration metric: took 17.425406ms WaitForService to wait for kubelet
	I1002 20:41:24.294761   44572 kubeadm.go:586] duration metric: took 8.832219462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:41:24.294789   44572 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:41:24.298416   44572 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 20:41:24.298440   44572 node_conditions.go:123] node cpu capacity is 2
	I1002 20:41:24.298453   44572 node_conditions.go:105] duration metric: took 3.657806ms to run NodePressure ...
	I1002 20:41:24.298468   44572 start.go:242] waiting for startup goroutines ...
	I1002 20:41:24.298482   44572 start.go:247] waiting for cluster config update ...
	I1002 20:41:24.298496   44572 start.go:256] writing updated cluster config ...
	I1002 20:41:24.298791   44572 ssh_runner.go:195] Run: rm -f paused
	I1002 20:41:24.304240   44572 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:41:24.304824   44572 kapi.go:59] client config for test-preload-586629: &rest.Config{Host:"https://192.168.39.49:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/test-preload-586629/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:41:24.307584   44572 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-zjgzx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:24.313207   44572 pod_ready.go:94] pod "coredns-668d6bf9bc-zjgzx" is "Ready"
	I1002 20:41:24.313224   44572 pod_ready.go:86] duration metric: took 5.61597ms for pod "coredns-668d6bf9bc-zjgzx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:24.315651   44572 pod_ready.go:83] waiting for pod "etcd-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:24.320164   44572 pod_ready.go:94] pod "etcd-test-preload-586629" is "Ready"
	I1002 20:41:24.320181   44572 pod_ready.go:86] duration metric: took 4.513139ms for pod "etcd-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:24.321807   44572 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 20:41:26.328599   44572 pod_ready.go:104] pod "kube-apiserver-test-preload-586629" is not "Ready", error: <nil>
	I1002 20:41:26.827674   44572 pod_ready.go:94] pod "kube-apiserver-test-preload-586629" is "Ready"
	I1002 20:41:26.827708   44572 pod_ready.go:86] duration metric: took 2.505882951s for pod "kube-apiserver-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:26.829760   44572 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:26.834054   44572 pod_ready.go:94] pod "kube-controller-manager-test-preload-586629" is "Ready"
	I1002 20:41:26.834075   44572 pod_ready.go:86] duration metric: took 4.295699ms for pod "kube-controller-manager-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:26.908363   44572 pod_ready.go:83] waiting for pod "kube-proxy-pf6nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:27.308424   44572 pod_ready.go:94] pod "kube-proxy-pf6nq" is "Ready"
	I1002 20:41:27.308450   44572 pod_ready.go:86] duration metric: took 400.064009ms for pod "kube-proxy-pf6nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:27.508544   44572 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:29.108886   44572 pod_ready.go:94] pod "kube-scheduler-test-preload-586629" is "Ready"
	I1002 20:41:29.108913   44572 pod_ready.go:86] duration metric: took 1.600345264s for pod "kube-scheduler-test-preload-586629" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:41:29.108924   44572 pod_ready.go:40] duration metric: took 4.804639042s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:41:29.150611   44572 start.go:627] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1002 20:41:29.152235   44572 out.go:203] 
	W1002 20:41:29.153584   44572 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1002 20:41:29.154699   44572 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1002 20:41:29.155827   44572 out.go:179] * Done! kubectl is now configured to use "test-preload-586629" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.111971323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=855466f0-414c-4086-932e-d4f1606b443e name=/runtime.v1.RuntimeService/Version
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.113734954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e43b46c-b111-422a-8cc7-1a73d5f8d910 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.114191310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437690114170011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e43b46c-b111-422a-8cc7-1a73d5f8d910 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.114999332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b4bc5bf-3537-405a-9587-0197174e4e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.115066227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b4bc5bf-3537-405a-9587-0197174e4e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.115209837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2507f0a85620805dae9f4d443ac59046d4d592c32b39bb2700c8bac7edfb06cc,PodSandboxId:37cfea36790d230a51c34bc45e056d8d0f8adb547a7cbfd13f488d70f6b9dd12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759437682053322371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zjgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a593fb-cecb-4fa2-80c6-cd32fd451c95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b31df38f5433cb9919af8ca016ee00eeef5ed9adfa441c08a68c9d37726e48,PodSandboxId:dc13f2f7f1dcd6ae24688005d529d93b4d5ab1d090463d8a2d09aee3850ea58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759437674604786730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3775d2ea-6616-40c6-873d-9a459a4d74bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da925084d2102b6a1e2b1ab3501b5e44d59d4e1cd714d69d4f503c8c05ba5c6e,PodSandboxId:b01c6c75b7d4cf9a169f84c6acf4a82adfe62da14d18a724bf47b1bd480ece26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759437674504100363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf6nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
480651-06ec-4c01-8dd8-7ee5c2f56a48,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1c664a432879275ceb223b86a9387c4ed3face89298d93a22adcae1f3df1cf,PodSandboxId:0d4726c4d4b8431a9d3aa6e10b6764323378733e3b013f38b5459d1774a2730c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759437670364918009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af85cc73d
10530bba42a7b68728bbea7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e95e9ccf00e2fc03c33b7dcc8c9b7684a9ec37435af426cce62a7c93816032c,PodSandboxId:71317960ea963efdd7b58f1f98a44495b1ef8642cc94a593b15cee95946f894e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759437670328533787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eec566e29c2f2f6bf85ecdad169c41f,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28130942c78262c05728537a31bd3076be81dc87f06e10ae8ecabb65e5c3d5f,PodSandboxId:3705177513cf4df95c2a64c7b93c6e43ea68e5ab563c1a32f7faf427afe55dfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759437670260167605,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1bdf77f159d6b5382fa565df76f2ccb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5682b042e990a34f17b1ea24e027d8d1f943d749a65b23870091668a0b4274,PodSandboxId:691d8022bc47f282176998257bedb8804dea5df5367b753de112b8a23d907760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759437670250317941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 224357c5f2aa284d16abf39908f71b93,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b4bc5bf-3537-405a-9587-0197174e4e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.155303035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75afc432-2210-4269-b1be-c17f28f44b5a name=/runtime.v1.RuntimeService/Version
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.155421175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75afc432-2210-4269-b1be-c17f28f44b5a name=/runtime.v1.RuntimeService/Version
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.156470122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f14cc3f2-845b-4ab5-abfd-53f773306b4c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.157241992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437690157204501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f14cc3f2-845b-4ab5-abfd-53f773306b4c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.157988069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38e3d367-2b8a-4f96-a413-b4c4b7e790d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.158066468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38e3d367-2b8a-4f96-a413-b4c4b7e790d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.158421228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2507f0a85620805dae9f4d443ac59046d4d592c32b39bb2700c8bac7edfb06cc,PodSandboxId:37cfea36790d230a51c34bc45e056d8d0f8adb547a7cbfd13f488d70f6b9dd12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759437682053322371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zjgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a593fb-cecb-4fa2-80c6-cd32fd451c95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b31df38f5433cb9919af8ca016ee00eeef5ed9adfa441c08a68c9d37726e48,PodSandboxId:dc13f2f7f1dcd6ae24688005d529d93b4d5ab1d090463d8a2d09aee3850ea58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759437674604786730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3775d2ea-6616-40c6-873d-9a459a4d74bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da925084d2102b6a1e2b1ab3501b5e44d59d4e1cd714d69d4f503c8c05ba5c6e,PodSandboxId:b01c6c75b7d4cf9a169f84c6acf4a82adfe62da14d18a724bf47b1bd480ece26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759437674504100363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf6nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
480651-06ec-4c01-8dd8-7ee5c2f56a48,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1c664a432879275ceb223b86a9387c4ed3face89298d93a22adcae1f3df1cf,PodSandboxId:0d4726c4d4b8431a9d3aa6e10b6764323378733e3b013f38b5459d1774a2730c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759437670364918009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af85cc73d
10530bba42a7b68728bbea7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e95e9ccf00e2fc03c33b7dcc8c9b7684a9ec37435af426cce62a7c93816032c,PodSandboxId:71317960ea963efdd7b58f1f98a44495b1ef8642cc94a593b15cee95946f894e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759437670328533787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eec566e29c2f2f6bf85ecdad169c41f,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28130942c78262c05728537a31bd3076be81dc87f06e10ae8ecabb65e5c3d5f,PodSandboxId:3705177513cf4df95c2a64c7b93c6e43ea68e5ab563c1a32f7faf427afe55dfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759437670260167605,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1bdf77f159d6b5382fa565df76f2ccb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5682b042e990a34f17b1ea24e027d8d1f943d749a65b23870091668a0b4274,PodSandboxId:691d8022bc47f282176998257bedb8804dea5df5367b753de112b8a23d907760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759437670250317941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 224357c5f2aa284d16abf39908f71b93,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38e3d367-2b8a-4f96-a413-b4c4b7e790d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.171818310Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c447ee6a-67d5-4e7b-8e8a-f36923a4de1a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.171990057Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:37cfea36790d230a51c34bc45e056d8d0f8adb547a7cbfd13f488d70f6b9dd12,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-zjgzx,Uid:c4a593fb-cecb-4fa2-80c6-cd32fd451c95,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759437681827322534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-zjgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a593fb-cecb-4fa2-80c6-cd32fd451c95,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-02T20:41:13.955382795Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b01c6c75b7d4cf9a169f84c6acf4a82adfe62da14d18a724bf47b1bd480ece26,Metadata:&PodSandboxMetadata{Name:kube-proxy-pf6nq,Uid:dd480651-06ec-4c01-8dd8-7ee5c2f56a48,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1759437674267807182,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pf6nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd480651-06ec-4c01-8dd8-7ee5c2f56a48,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-02T20:41:13.955378264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc13f2f7f1dcd6ae24688005d529d93b4d5ab1d090463d8a2d09aee3850ea58f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3775d2ea-6616-40c6-873d-9a459a4d74bb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759437674266000161,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3775d2ea-6616-40c6-873d-9a45
9a4d74bb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-02T20:41:13.955381558Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71317960ea963efdd7b58f1f98a44495b1ef8642cc94a593b15cee95946f894e,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-586629,Uid:6eec566e29c2f2f6b
f85ecdad169c41f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759437670055932143,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eec566e29c2f2f6bf85ecdad169c41f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.49:2379,kubernetes.io/config.hash: 6eec566e29c2f2f6bf85ecdad169c41f,kubernetes.io/config.seen: 2025-10-02T20:41:09.015104297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d4726c4d4b8431a9d3aa6e10b6764323378733e3b013f38b5459d1774a2730c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-586629,Uid:af85cc73d10530bba42a7b68728bbea7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759437670052376629,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-pre
load-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af85cc73d10530bba42a7b68728bbea7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: af85cc73d10530bba42a7b68728bbea7,kubernetes.io/config.seen: 2025-10-02T20:41:08.952566770Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3705177513cf4df95c2a64c7b93c6e43ea68e5ab563c1a32f7faf427afe55dfa,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-586629,Uid:f1bdf77f159d6b5382fa565df76f2ccb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759437670019377406,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1bdf77f159d6b5382fa565df76f2ccb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.49:8443,kubernetes.io/config.hash: f1bdf77f159d6b538
2fa565df76f2ccb,kubernetes.io/config.seen: 2025-10-02T20:41:08.952558562Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:691d8022bc47f282176998257bedb8804dea5df5367b753de112b8a23d907760,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-586629,Uid:224357c5f2aa284d16abf39908f71b93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1759437670019109311,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 224357c5f2aa284d16abf39908f71b93,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 224357c5f2aa284d16abf39908f71b93,kubernetes.io/config.seen: 2025-10-02T20:41:08.952565499Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c447ee6a-67d5-4e7b-8e8a-f36923a4de1a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.172769840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69ca16e2-82dd-4469-8d5a-b07e1ec81515 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.172824181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69ca16e2-82dd-4469-8d5a-b07e1ec81515 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.172991806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2507f0a85620805dae9f4d443ac59046d4d592c32b39bb2700c8bac7edfb06cc,PodSandboxId:37cfea36790d230a51c34bc45e056d8d0f8adb547a7cbfd13f488d70f6b9dd12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759437682053322371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zjgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a593fb-cecb-4fa2-80c6-cd32fd451c95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b31df38f5433cb9919af8ca016ee00eeef5ed9adfa441c08a68c9d37726e48,PodSandboxId:dc13f2f7f1dcd6ae24688005d529d93b4d5ab1d090463d8a2d09aee3850ea58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759437674604786730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3775d2ea-6616-40c6-873d-9a459a4d74bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da925084d2102b6a1e2b1ab3501b5e44d59d4e1cd714d69d4f503c8c05ba5c6e,PodSandboxId:b01c6c75b7d4cf9a169f84c6acf4a82adfe62da14d18a724bf47b1bd480ece26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759437674504100363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf6nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
480651-06ec-4c01-8dd8-7ee5c2f56a48,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1c664a432879275ceb223b86a9387c4ed3face89298d93a22adcae1f3df1cf,PodSandboxId:0d4726c4d4b8431a9d3aa6e10b6764323378733e3b013f38b5459d1774a2730c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759437670364918009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af85cc73d
10530bba42a7b68728bbea7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e95e9ccf00e2fc03c33b7dcc8c9b7684a9ec37435af426cce62a7c93816032c,PodSandboxId:71317960ea963efdd7b58f1f98a44495b1ef8642cc94a593b15cee95946f894e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759437670328533787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eec566e29c2f2f6bf85ecdad169c41f,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28130942c78262c05728537a31bd3076be81dc87f06e10ae8ecabb65e5c3d5f,PodSandboxId:3705177513cf4df95c2a64c7b93c6e43ea68e5ab563c1a32f7faf427afe55dfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759437670260167605,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1bdf77f159d6b5382fa565df76f2ccb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5682b042e990a34f17b1ea24e027d8d1f943d749a65b23870091668a0b4274,PodSandboxId:691d8022bc47f282176998257bedb8804dea5df5367b753de112b8a23d907760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759437670250317941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 224357c5f2aa284d16abf39908f71b93,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69ca16e2-82dd-4469-8d5a-b07e1ec81515 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.202587240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d16129b-22a5-444f-8270-04420b7fbb43 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.202708862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d16129b-22a5-444f-8270-04420b7fbb43 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.204041956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60d13b7b-5a73-4d01-bfe3-eb2a37aa9ba4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.204483287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437690204460852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60d13b7b-5a73-4d01-bfe3-eb2a37aa9ba4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.205118846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5516939c-dfa6-4d39-acc1-7b1ea5981489 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.205173812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5516939c-dfa6-4d39-acc1-7b1ea5981489 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:41:30 test-preload-586629 crio[840]: time="2025-10-02 20:41:30.205353008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2507f0a85620805dae9f4d443ac59046d4d592c32b39bb2700c8bac7edfb06cc,PodSandboxId:37cfea36790d230a51c34bc45e056d8d0f8adb547a7cbfd13f488d70f6b9dd12,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1759437682053322371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-zjgzx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4a593fb-cecb-4fa2-80c6-cd32fd451c95,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b31df38f5433cb9919af8ca016ee00eeef5ed9adfa441c08a68c9d37726e48,PodSandboxId:dc13f2f7f1dcd6ae24688005d529d93b4d5ab1d090463d8a2d09aee3850ea58f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1759437674604786730,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 3775d2ea-6616-40c6-873d-9a459a4d74bb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da925084d2102b6a1e2b1ab3501b5e44d59d4e1cd714d69d4f503c8c05ba5c6e,PodSandboxId:b01c6c75b7d4cf9a169f84c6acf4a82adfe62da14d18a724bf47b1bd480ece26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1759437674504100363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf6nq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd
480651-06ec-4c01-8dd8-7ee5c2f56a48,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b1c664a432879275ceb223b86a9387c4ed3face89298d93a22adcae1f3df1cf,PodSandboxId:0d4726c4d4b8431a9d3aa6e10b6764323378733e3b013f38b5459d1774a2730c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1759437670364918009,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af85cc73d
10530bba42a7b68728bbea7,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e95e9ccf00e2fc03c33b7dcc8c9b7684a9ec37435af426cce62a7c93816032c,PodSandboxId:71317960ea963efdd7b58f1f98a44495b1ef8642cc94a593b15cee95946f894e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1759437670328533787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6eec566e29c2f2f6bf85ecdad169c41f,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f28130942c78262c05728537a31bd3076be81dc87f06e10ae8ecabb65e5c3d5f,PodSandboxId:3705177513cf4df95c2a64c7b93c6e43ea68e5ab563c1a32f7faf427afe55dfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1759437670260167605,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1bdf77f159d6b5382fa565df76f2ccb,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5682b042e990a34f17b1ea24e027d8d1f943d749a65b23870091668a0b4274,PodSandboxId:691d8022bc47f282176998257bedb8804dea5df5367b753de112b8a23d907760,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1759437670250317941,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-586629,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 224357c5f2aa284d16abf39908f71b93,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5516939c-dfa6-4d39-acc1-7b1ea5981489 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2507f0a856208       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   8 seconds ago       Running             coredns                   1                   37cfea36790d2       coredns-668d6bf9bc-zjgzx
	d7b31df38f543       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   dc13f2f7f1dcd       storage-provisioner
	da925084d2102       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   b01c6c75b7d4c       kube-proxy-pf6nq
	5b1c664a43287       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   0d4726c4d4b84       kube-scheduler-test-preload-586629
	5e95e9ccf00e2       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   71317960ea963       etcd-test-preload-586629
	f28130942c782       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   3705177513cf4       kube-apiserver-test-preload-586629
	fd5682b042e99       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   691d8022bc47f       kube-controller-manager-test-preload-586629
	
	
	==> coredns [2507f0a85620805dae9f4d443ac59046d4d592c32b39bb2700c8bac7edfb06cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39802 - 15648 "HINFO IN 8096471467150376366.394624627547975529. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039251044s
	
	
	==> describe nodes <==
	Name:               test-preload-586629
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-586629
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=test-preload-586629
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_39_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:39:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-586629
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:41:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:41:23 +0000   Thu, 02 Oct 2025 20:39:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:41:23 +0000   Thu, 02 Oct 2025 20:39:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:41:23 +0000   Thu, 02 Oct 2025 20:39:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:41:23 +0000   Thu, 02 Oct 2025 20:41:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    test-preload-586629
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 9273ddd99a4240c5bd07ebaac7fbad81
	  System UUID:                9273ddd9-9a42-40c5-bd07-ebaac7fbad81
	  Boot ID:                    01cec841-3df2-4b90-92f8-483f3350f50b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-zjgzx                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     115s
	  kube-system                 etcd-test-preload-586629                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m
	  kube-system                 kube-apiserver-test-preload-586629             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-test-preload-586629    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-pf6nq                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-test-preload-586629             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 113s                 kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node test-preload-586629 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node test-preload-586629 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node test-preload-586629 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node test-preload-586629 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node test-preload-586629 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node test-preload-586629 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   NodeReady                119s                 kubelet          Node test-preload-586629 status is now: NodeReady
	  Normal   RegisteredNode           116s                 node-controller  Node test-preload-586629 event: Registered Node test-preload-586629 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 22s)    kubelet          Node test-preload-586629 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 22s)    kubelet          Node test-preload-586629 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 22s)    kubelet          Node test-preload-586629 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-586629 has been rebooted, boot id: 01cec841-3df2-4b90-92f8-483f3350f50b
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-586629 event: Registered Node test-preload-586629 in Controller
	
	
	==> dmesg <==
	[Oct 2 20:40] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000044] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006701] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.026603] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 2 20:41] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.110609] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.665987] kauditd_printk_skb: 205 callbacks suppressed
	[  +0.000313] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.075764] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [5e95e9ccf00e2fc03c33b7dcc8c9b7684a9ec37435af426cce62a7c93816032c] <==
	{"level":"info","ts":"2025-10-02T20:41:10.813705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 switched to configuration voters=(9163207290670869266)"}
	{"level":"info","ts":"2025-10-02T20:41:10.813786Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"28c39da372138ae1","local-member-id":"7f2a407b6bb4eb12","added-peer-id":"7f2a407b6bb4eb12","added-peer-peer-urls":["https://192.168.39.49:2380"]}
	{"level":"info","ts":"2025-10-02T20:41:10.813913Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"28c39da372138ae1","local-member-id":"7f2a407b6bb4eb12","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T20:41:10.813965Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T20:41:10.816751Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T20:41:10.827096Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"7f2a407b6bb4eb12","initial-advertise-peer-urls":["https://192.168.39.49:2380"],"listen-peer-urls":["https://192.168.39.49:2380"],"advertise-client-urls":["https://192.168.39.49:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.49:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T20:41:10.828013Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T20:41:10.826688Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"info","ts":"2025-10-02T20:41:10.834761Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.49:2380"}
	{"level":"info","ts":"2025-10-02T20:41:12.173064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T20:41:12.173125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T20:41:12.173161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 received MsgPreVoteResp from 7f2a407b6bb4eb12 at term 2"}
	{"level":"info","ts":"2025-10-02T20:41:12.173173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T20:41:12.173183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 received MsgVoteResp from 7f2a407b6bb4eb12 at term 3"}
	{"level":"info","ts":"2025-10-02T20:41:12.173191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 became leader at term 3"}
	{"level":"info","ts":"2025-10-02T20:41:12.173197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7f2a407b6bb4eb12 elected leader 7f2a407b6bb4eb12 at term 3"}
	{"level":"info","ts":"2025-10-02T20:41:12.178878Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7f2a407b6bb4eb12","local-member-attributes":"{Name:test-preload-586629 ClientURLs:[https://192.168.39.49:2379]}","request-path":"/0/members/7f2a407b6bb4eb12/attributes","cluster-id":"28c39da372138ae1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T20:41:12.178918Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T20:41:12.179062Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T20:41:12.179084Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T20:41:12.179097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T20:41:12.179891Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T20:41:12.179936Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-02T20:41:12.180555Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.49:2379"}
	{"level":"info","ts":"2025-10-02T20:41:12.180601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:41:30 up 0 min,  0 users,  load average: 0.48, 0.14, 0.05
	Linux test-preload-586629 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f28130942c78262c05728537a31bd3076be81dc87f06e10ae8ecabb65e5c3d5f] <==
	I1002 20:41:13.410756       1 aggregator.go:171] initial CRD sync complete...
	I1002 20:41:13.410825       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 20:41:13.410844       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 20:41:13.410860       1 cache.go:39] Caches are synced for autoregister controller
	I1002 20:41:13.448145       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1002 20:41:13.489136       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1002 20:41:13.489510       1 policy_source.go:240] refreshing policies
	I1002 20:41:13.492792       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 20:41:13.494446       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 20:41:13.494529       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1002 20:41:13.494570       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 20:41:13.495267       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 20:41:13.495304       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 20:41:13.500024       1 shared_informer.go:320] Caches are synced for configmaps
	I1002 20:41:13.500462       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 20:41:13.512137       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:41:14.024591       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1002 20:41:14.305318       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:41:15.258829       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1002 20:41:15.298381       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1002 20:41:15.335777       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:41:15.345396       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:41:16.774017       1 controller.go:615] quota admission added evaluator for: endpoints
	I1002 20:41:16.868831       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:41:17.017991       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fd5682b042e990a34f17b1ea24e027d8d1f943d749a65b23870091668a0b4274] <==
	I1002 20:41:16.666030       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1002 20:41:16.667230       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1002 20:41:16.667439       1 shared_informer.go:320] Caches are synced for GC
	I1002 20:41:16.670830       1 shared_informer.go:320] Caches are synced for job
	I1002 20:41:16.675308       1 shared_informer.go:320] Caches are synced for disruption
	I1002 20:41:16.679753       1 shared_informer.go:320] Caches are synced for garbage collector
	I1002 20:41:16.679960       1 shared_informer.go:320] Caches are synced for HPA
	I1002 20:41:16.681158       1 shared_informer.go:320] Caches are synced for expand
	I1002 20:41:16.684534       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1002 20:41:16.690547       1 shared_informer.go:320] Caches are synced for taint
	I1002 20:41:16.690736       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 20:41:16.690827       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-586629"
	I1002 20:41:16.690868       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 20:41:16.715123       1 shared_informer.go:320] Caches are synced for garbage collector
	I1002 20:41:16.715199       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:41:16.715222       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:41:16.734055       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-586629"
	I1002 20:41:17.025160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="370.412739ms"
	I1002 20:41:17.026888       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.392µs"
	I1002 20:41:22.167186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="69.911µs"
	I1002 20:41:23.174564       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="8.998879ms"
	I1002 20:41:23.176937       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="59.965µs"
	I1002 20:41:23.830383       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-586629"
	I1002 20:41:23.842987       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-586629"
	I1002 20:41:26.692177       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [da925084d2102b6a1e2b1ab3501b5e44d59d4e1cd714d69d4f503c8c05ba5c6e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1002 20:41:14.824201       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1002 20:41:14.834529       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.49"]
	E1002 20:41:14.834668       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:41:14.872900       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1002 20:41:14.873099       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 20:41:14.873123       1 server_linux.go:170] "Using iptables Proxier"
	I1002 20:41:14.877707       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:41:14.878072       1 server.go:497] "Version info" version="v1.32.0"
	I1002 20:41:14.878100       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:41:14.879714       1 config.go:199] "Starting service config controller"
	I1002 20:41:14.879785       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1002 20:41:14.879813       1 config.go:105] "Starting endpoint slice config controller"
	I1002 20:41:14.879817       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1002 20:41:14.881290       1 config.go:329] "Starting node config controller"
	I1002 20:41:14.881320       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1002 20:41:14.980467       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1002 20:41:14.980481       1 shared_informer.go:320] Caches are synced for service config
	I1002 20:41:14.981443       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b1c664a432879275ceb223b86a9387c4ed3face89298d93a22adcae1f3df1cf] <==
	I1002 20:41:11.313533       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:41:13.345235       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:41:13.345304       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:41:13.345315       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:41:13.345325       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:41:13.422939       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1002 20:41:13.423141       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:41:13.427064       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:41:13.427111       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 20:41:13.427256       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1002 20:41:13.427368       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:41:13.528130       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 20:41:13 test-preload-586629 kubelet[1162]: E1002 20:41:13.605077    1162 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-586629\" already exists" pod="kube-system/kube-scheduler-test-preload-586629"
	Oct 02 20:41:13 test-preload-586629 kubelet[1162]: I1002 20:41:13.605124    1162 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-586629"
	Oct 02 20:41:13 test-preload-586629 kubelet[1162]: I1002 20:41:13.606238    1162 setters.go:602] "Node became not ready" node="test-preload-586629" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T20:41:13Z","lastTransitionTime":"2025-10-02T20:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 02 20:41:13 test-preload-586629 kubelet[1162]: E1002 20:41:13.621231    1162 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-586629\" already exists" pod="kube-system/etcd-test-preload-586629"
	Oct 02 20:41:13 test-preload-586629 kubelet[1162]: I1002 20:41:13.952430    1162 apiserver.go:52] "Watching apiserver"
	Oct 02 20:41:13 test-preload-586629 kubelet[1162]: E1002 20:41:13.957189    1162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-zjgzx" podUID="c4a593fb-cecb-4fa2-80c6-cd32fd451c95"
	Oct 02 20:41:13 test-preload-586629 kubelet[1162]: I1002 20:41:13.960704    1162 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: E1002 20:41:14.015310    1162 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: I1002 20:41:14.016476    1162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3775d2ea-6616-40c6-873d-9a459a4d74bb-tmp\") pod \"storage-provisioner\" (UID: \"3775d2ea-6616-40c6-873d-9a459a4d74bb\") " pod="kube-system/storage-provisioner"
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: I1002 20:41:14.016532    1162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd480651-06ec-4c01-8dd8-7ee5c2f56a48-lib-modules\") pod \"kube-proxy-pf6nq\" (UID: \"dd480651-06ec-4c01-8dd8-7ee5c2f56a48\") " pod="kube-system/kube-proxy-pf6nq"
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: I1002 20:41:14.016558    1162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd480651-06ec-4c01-8dd8-7ee5c2f56a48-xtables-lock\") pod \"kube-proxy-pf6nq\" (UID: \"dd480651-06ec-4c01-8dd8-7ee5c2f56a48\") " pod="kube-system/kube-proxy-pf6nq"
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: E1002 20:41:14.017589    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: E1002 20:41:14.017750    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume podName:c4a593fb-cecb-4fa2-80c6-cd32fd451c95 nodeName:}" failed. No retries permitted until 2025-10-02 20:41:14.51772439 +0000 UTC m=+5.688706843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume") pod "coredns-668d6bf9bc-zjgzx" (UID: "c4a593fb-cecb-4fa2-80c6-cd32fd451c95") : object "kube-system"/"coredns" not registered
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: E1002 20:41:14.522679    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 20:41:14 test-preload-586629 kubelet[1162]: E1002 20:41:14.522753    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume podName:c4a593fb-cecb-4fa2-80c6-cd32fd451c95 nodeName:}" failed. No retries permitted until 2025-10-02 20:41:15.522729841 +0000 UTC m=+6.693712307 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume") pod "coredns-668d6bf9bc-zjgzx" (UID: "c4a593fb-cecb-4fa2-80c6-cd32fd451c95") : object "kube-system"/"coredns" not registered
	Oct 02 20:41:15 test-preload-586629 kubelet[1162]: E1002 20:41:15.532919    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 20:41:15 test-preload-586629 kubelet[1162]: E1002 20:41:15.533029    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume podName:c4a593fb-cecb-4fa2-80c6-cd32fd451c95 nodeName:}" failed. No retries permitted until 2025-10-02 20:41:17.53301468 +0000 UTC m=+8.703997136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume") pod "coredns-668d6bf9bc-zjgzx" (UID: "c4a593fb-cecb-4fa2-80c6-cd32fd451c95") : object "kube-system"/"coredns" not registered
	Oct 02 20:41:16 test-preload-586629 kubelet[1162]: E1002 20:41:16.021687    1162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-zjgzx" podUID="c4a593fb-cecb-4fa2-80c6-cd32fd451c95"
	Oct 02 20:41:17 test-preload-586629 kubelet[1162]: E1002 20:41:17.545730    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 02 20:41:17 test-preload-586629 kubelet[1162]: E1002 20:41:17.546439    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume podName:c4a593fb-cecb-4fa2-80c6-cd32fd451c95 nodeName:}" failed. No retries permitted until 2025-10-02 20:41:21.546414771 +0000 UTC m=+12.717397237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c4a593fb-cecb-4fa2-80c6-cd32fd451c95-config-volume") pod "coredns-668d6bf9bc-zjgzx" (UID: "c4a593fb-cecb-4fa2-80c6-cd32fd451c95") : object "kube-system"/"coredns" not registered
	Oct 02 20:41:18 test-preload-586629 kubelet[1162]: E1002 20:41:18.021607    1162 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-zjgzx" podUID="c4a593fb-cecb-4fa2-80c6-cd32fd451c95"
	Oct 02 20:41:19 test-preload-586629 kubelet[1162]: E1002 20:41:19.013969    1162 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437679013533376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 20:41:19 test-preload-586629 kubelet[1162]: E1002 20:41:19.013994    1162 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437679013533376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 20:41:29 test-preload-586629 kubelet[1162]: E1002 20:41:29.016194    1162 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437689015839584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 02 20:41:29 test-preload-586629 kubelet[1162]: E1002 20:41:29.016218    1162 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437689015839584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d7b31df38f5433cb9919af8ca016ee00eeef5ed9adfa441c08a68c9d37726e48] <==
	I1002 20:41:14.752060       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-586629 -n test-preload-586629
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-586629 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-586629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-586629
--- FAIL: TestPreload (174.36s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (90.55s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-762562 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-762562 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.62270485s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-762562] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-762562" primary control-plane node in "pause-762562" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-762562" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:45:14.078354   47754 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:14.078778   47754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:14.078795   47754 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:14.078801   47754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:14.079152   47754 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:45:14.079922   47754 out.go:368] Setting JSON to false
	I1002 20:45:14.081356   47754 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5257,"bootTime":1759432657,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:45:14.081439   47754 start.go:140] virtualization: kvm guest
	I1002 20:45:14.083648   47754 out.go:179] * [pause-762562] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:45:14.085266   47754 notify.go:221] Checking for updates...
	I1002 20:45:14.085350   47754 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:45:14.088475   47754 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:45:14.090432   47754 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:45:14.091677   47754 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:45:14.092966   47754 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:45:14.098059   47754 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:45:14.100105   47754 config.go:182] Loaded profile config "pause-762562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:14.100759   47754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:45:14.100856   47754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:45:14.122892   47754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I1002 20:45:14.123533   47754 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:45:14.124193   47754 main.go:141] libmachine: Using API Version  1
	I1002 20:45:14.124275   47754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:45:14.124762   47754 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:45:14.125005   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:14.125415   47754 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:45:14.126046   47754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:45:14.126105   47754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:45:14.142422   47754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I1002 20:45:14.142973   47754 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:45:14.143595   47754 main.go:141] libmachine: Using API Version  1
	I1002 20:45:14.143633   47754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:45:14.144148   47754 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:45:14.144410   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:14.188206   47754 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 20:45:14.189641   47754 start.go:306] selected driver: kvm2
	I1002 20:45:14.189665   47754 start.go:936] validating driver "kvm2" against &{Name:pause-762562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-762562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:14.189854   47754 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:45:14.190403   47754 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:45:14.190547   47754 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:45:14.211555   47754 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:45:14.211612   47754 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:45:14.229444   47754 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:45:14.230432   47754 cni.go:84] Creating CNI manager for ""
	I1002 20:45:14.230506   47754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:45:14.230593   47754 start.go:350] cluster config:
	{Name:pause-762562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-762562 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:14.230769   47754 iso.go:125] acquiring lock: {Name:mkabc2fb4ac96edf87725f05149cf44e9a15d593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:45:14.234906   47754 out.go:179] * Starting "pause-762562" primary control-plane node in "pause-762562" cluster
	I1002 20:45:14.236473   47754 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:45:14.236556   47754 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:45:14.236576   47754 cache.go:59] Caching tarball of preloaded images
	I1002 20:45:14.236703   47754 preload.go:233] Found /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:45:14.236717   47754 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:45:14.236893   47754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/config.json ...
	I1002 20:45:14.237186   47754 start.go:361] acquireMachinesLock for pause-762562: {Name:mk83006c688982612686a8dbdd0b9c4ecd5d338c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:45:38.044664   47754 start.go:365] duration metric: took 23.807442905s to acquireMachinesLock for "pause-762562"
	I1002 20:45:38.044735   47754 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:45:38.044746   47754 fix.go:55] fixHost starting: 
	I1002 20:45:38.045254   47754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:45:38.045498   47754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:45:38.064340   47754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I1002 20:45:38.064858   47754 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:45:38.065323   47754 main.go:141] libmachine: Using API Version  1
	I1002 20:45:38.065350   47754 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:45:38.065691   47754 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:45:38.065912   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:38.066090   47754 main.go:141] libmachine: (pause-762562) Calling .GetState
	I1002 20:45:38.068989   47754 fix.go:113] recreateIfNeeded on pause-762562: state=Running err=<nil>
	W1002 20:45:38.069021   47754 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:45:38.070931   47754 out.go:252] * Updating the running kvm2 "pause-762562" VM ...
	I1002 20:45:38.070980   47754 machine.go:93] provisionDockerMachine start ...
	I1002 20:45:38.071006   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:38.071311   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:38.075336   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.076003   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:38.076051   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.076283   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:38.076473   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:38.076643   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:38.076793   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:38.076965   47754 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:38.077294   47754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I1002 20:45:38.077310   47754 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:45:38.208513   47754 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-762562
	
	I1002 20:45:38.208541   47754 main.go:141] libmachine: (pause-762562) Calling .GetMachineName
	I1002 20:45:38.208816   47754 buildroot.go:166] provisioning hostname "pause-762562"
	I1002 20:45:38.208840   47754 main.go:141] libmachine: (pause-762562) Calling .GetMachineName
	I1002 20:45:38.209029   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:38.213337   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.213885   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:38.213939   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.214367   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:38.214641   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:38.214950   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:38.215138   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:38.215440   47754 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:38.215718   47754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I1002 20:45:38.215755   47754 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-762562 && echo "pause-762562" | sudo tee /etc/hostname
	I1002 20:45:38.369636   47754 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-762562
	
	I1002 20:45:38.369670   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:38.373836   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.374333   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:38.374365   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.374623   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:38.374806   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:38.374959   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:38.375099   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:38.375245   47754 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:38.375511   47754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I1002 20:45:38.375530   47754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-762562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-762562/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-762562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:45:38.504794   47754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:45:38.504828   47754 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9524/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9524/.minikube}
	I1002 20:45:38.504852   47754 buildroot.go:174] setting up certificates
	I1002 20:45:38.504866   47754 provision.go:84] configureAuth start
	I1002 20:45:38.504882   47754 main.go:141] libmachine: (pause-762562) Calling .GetMachineName
	I1002 20:45:38.505229   47754 main.go:141] libmachine: (pause-762562) Calling .GetIP
	I1002 20:45:38.509329   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.509890   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:38.509931   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.510154   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:38.513557   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.514075   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:38.514098   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.514364   47754 provision.go:143] copyHostCerts
	I1002 20:45:38.514423   47754 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem, removing ...
	I1002 20:45:38.514447   47754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem
	I1002 20:45:38.514508   47754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem (1123 bytes)
	I1002 20:45:38.514609   47754 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem, removing ...
	I1002 20:45:38.514626   47754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem
	I1002 20:45:38.514660   47754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem (1679 bytes)
	I1002 20:45:38.514779   47754 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem, removing ...
	I1002 20:45:38.514793   47754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem
	I1002 20:45:38.514831   47754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem (1082 bytes)
	I1002 20:45:38.514914   47754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem org=jenkins.pause-762562 san=[127.0.0.1 192.168.50.218 localhost minikube pause-762562]
	I1002 20:45:38.840098   47754 provision.go:177] copyRemoteCerts
	I1002 20:45:38.840162   47754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:45:38.840190   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:38.843904   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.844306   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:38.844338   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:38.844587   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:38.844838   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:38.845037   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:38.845222   47754 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/pause-762562/id_rsa Username:docker}
	I1002 20:45:38.944138   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 20:45:38.993812   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:45:39.044118   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:45:39.087504   47754 provision.go:87] duration metric: took 582.618383ms to configureAuth
	I1002 20:45:39.087551   47754 buildroot.go:189] setting minikube options for container-runtime
	I1002 20:45:39.087890   47754 config.go:182] Loaded profile config "pause-762562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:39.088019   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:39.092188   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:39.092619   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:39.092651   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:39.092920   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:39.093150   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:39.093380   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:39.093570   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:39.093777   47754 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:39.094011   47754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I1002 20:45:39.094034   47754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:45:44.798342   47754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:45:44.798374   47754 machine.go:96] duration metric: took 6.727382931s to provisionDockerMachine
	I1002 20:45:44.798386   47754 start.go:294] postStartSetup for "pause-762562" (driver="kvm2")
	I1002 20:45:44.798398   47754 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:45:44.798423   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:44.798874   47754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:45:44.798909   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:44.803075   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:44.803643   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:44.803705   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:44.803954   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:44.804180   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:44.804536   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:44.804759   47754 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/pause-762562/id_rsa Username:docker}
	I1002 20:45:44.908704   47754 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:45:44.915532   47754 info.go:137] Remote host: Buildroot 2025.02
	I1002 20:45:44.915573   47754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/addons for local assets ...
	I1002 20:45:44.915646   47754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/files for local assets ...
	I1002 20:45:44.915789   47754 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem -> 134492.pem in /etc/ssl/certs
	I1002 20:45:44.915988   47754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:45:44.931797   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem --> /etc/ssl/certs/134492.pem (1708 bytes)
	I1002 20:45:44.975496   47754 start.go:297] duration metric: took 177.069834ms for postStartSetup
	I1002 20:45:44.975543   47754 fix.go:57] duration metric: took 6.930798513s for fixHost
	I1002 20:45:44.975567   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:44.979482   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:44.980039   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:44.980070   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:44.980296   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:44.980483   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:44.980617   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:44.980805   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:44.981100   47754 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:44.981399   47754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I1002 20:45:44.981417   47754 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 20:45:45.116760   47754 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759437945.113033854
	
	I1002 20:45:45.116789   47754 fix.go:217] guest clock: 1759437945.113033854
	I1002 20:45:45.116799   47754 fix.go:230] Guest: 2025-10-02 20:45:45.113033854 +0000 UTC Remote: 2025-10-02 20:45:44.975548456 +0000 UTC m=+30.955126604 (delta=137.485398ms)
	I1002 20:45:45.116826   47754 fix.go:201] guest clock delta is within tolerance: 137.485398ms
	I1002 20:45:45.116833   47754 start.go:84] releasing machines lock for "pause-762562", held for 7.072136438s
	I1002 20:45:45.116862   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:45.117271   47754 main.go:141] libmachine: (pause-762562) Calling .GetIP
	I1002 20:45:45.121662   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:45.122173   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:45.122202   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:45.122437   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:45.123077   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:45.123279   47754 main.go:141] libmachine: (pause-762562) Calling .DriverName
	I1002 20:45:45.123401   47754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:45:45.123452   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:45.123511   47754 ssh_runner.go:195] Run: cat /version.json
	I1002 20:45:45.123538   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHHostname
	I1002 20:45:45.127263   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:45.127539   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:45.127865   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:45.127894   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:45.127995   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:45.128145   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:45.128166   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:45.128230   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:45.128409   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:45.128446   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHPort
	I1002 20:45:45.128596   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHKeyPath
	I1002 20:45:45.128606   47754 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/pause-762562/id_rsa Username:docker}
	I1002 20:45:45.128756   47754 main.go:141] libmachine: (pause-762562) Calling .GetSSHUsername
	I1002 20:45:45.128905   47754 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/pause-762562/id_rsa Username:docker}
	I1002 20:45:45.247978   47754 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:45.256954   47754 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:45:45.426443   47754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:45:45.438333   47754 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:45:45.438409   47754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:45:45.452455   47754 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:45:45.452485   47754 start.go:496] detecting cgroup driver to use...
	I1002 20:45:45.452545   47754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:45:45.478981   47754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:45:45.501886   47754 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:45:45.501958   47754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:45:45.530873   47754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:45:45.554861   47754 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:45:45.794487   47754 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:45:46.007425   47754 docker.go:234] disabling docker service ...
	I1002 20:45:46.007500   47754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:45:46.045578   47754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:45:46.064709   47754 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:45:46.275364   47754 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:45:46.494160   47754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:45:46.512483   47754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:45:46.543664   47754 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:45:46.543771   47754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:45:46.560965   47754 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:45:46.561050   47754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:45:46.577095   47754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:45:46.593476   47754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:45:46.610289   47754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:45:46.626082   47754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:45:46.646426   47754 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:45:46.664568   47754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:45:46.680494   47754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:45:46.693668   47754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:45:46.707796   47754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:45:46.912802   47754 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:45:47.428490   47754 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:45:47.428592   47754 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:45:47.438362   47754 start.go:564] Will wait 60s for crictl version
	I1002 20:45:47.438437   47754 ssh_runner.go:195] Run: which crictl
	I1002 20:45:47.443451   47754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 20:45:47.492548   47754 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 20:45:47.492632   47754 ssh_runner.go:195] Run: crio --version
	I1002 20:45:47.528135   47754 ssh_runner.go:195] Run: crio --version
	I1002 20:45:47.571330   47754 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1002 20:45:47.572609   47754 main.go:141] libmachine: (pause-762562) Calling .GetIP
	I1002 20:45:47.575902   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:47.576296   47754 main.go:141] libmachine: (pause-762562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:bf:32", ip: ""} in network mk-pause-762562: {Iface:virbr2 ExpiryTime:2025-10-02 21:44:03 +0000 UTC Type:0 Mac:52:54:00:2f:bf:32 Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:pause-762562 Clientid:01:52:54:00:2f:bf:32}
	I1002 20:45:47.576326   47754 main.go:141] libmachine: (pause-762562) DBG | domain pause-762562 has defined IP address 192.168.50.218 and MAC address 52:54:00:2f:bf:32 in network mk-pause-762562
	I1002 20:45:47.576554   47754 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1002 20:45:47.582591   47754 kubeadm.go:883] updating cluster {Name:pause-762562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-762562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:45:47.582713   47754 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:45:47.582774   47754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:45:47.637003   47754 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:45:47.637038   47754 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:45:47.637088   47754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:45:47.699463   47754 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:45:47.699488   47754 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:45:47.699495   47754 kubeadm.go:934] updating node { 192.168.50.218 8443 v1.34.1 crio true true} ...
	I1002 20:45:47.699588   47754 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-762562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-762562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:45:47.699677   47754 ssh_runner.go:195] Run: crio config
	I1002 20:45:47.792199   47754 cni.go:84] Creating CNI manager for ""
	I1002 20:45:47.792230   47754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:45:47.792249   47754 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:45:47.792289   47754 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-762562 NodeName:pause-762562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:45:47.792454   47754 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-762562"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.218"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:45:47.792537   47754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:45:47.823132   47754 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:45:47.823207   47754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:45:47.847269   47754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1002 20:45:47.887332   47754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:45:47.943055   47754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1002 20:45:48.042977   47754 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I1002 20:45:48.071674   47754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:45:48.486182   47754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:45:48.544996   47754 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562 for IP: 192.168.50.218
	I1002 20:45:48.545021   47754 certs.go:195] generating shared ca certs ...
	I1002 20:45:48.545043   47754 certs.go:227] acquiring lock for ca certs: {Name:mk36b72fb138c08da6f63c209f5b6ddd4af4f5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:45:48.545206   47754 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key
	I1002 20:45:48.545272   47754 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key
	I1002 20:45:48.545287   47754 certs.go:257] generating profile certs ...
	I1002 20:45:48.545394   47754 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.key
	I1002 20:45:48.545492   47754 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/apiserver.key.29ca7123
	I1002 20:45:48.545537   47754 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/proxy-client.key
	I1002 20:45:48.545687   47754 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449.pem (1338 bytes)
	W1002 20:45:48.545746   47754 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449_empty.pem, impossibly tiny 0 bytes
	I1002 20:45:48.545760   47754 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:45:48.545794   47754 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:45:48.545829   47754 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:45:48.545860   47754 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem (1679 bytes)
	I1002 20:45:48.545916   47754 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem (1708 bytes)
	I1002 20:45:48.546796   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:45:48.668108   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:45:48.781449   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:45:48.895562   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:45:49.067476   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:45:49.152291   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:45:49.264122   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:45:49.394853   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:45:49.468587   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem --> /usr/share/ca-certificates/134492.pem (1708 bytes)
	I1002 20:45:49.530190   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:45:49.599977   47754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449.pem --> /usr/share/ca-certificates/13449.pem (1338 bytes)
	I1002 20:45:49.681972   47754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:45:49.763417   47754 ssh_runner.go:195] Run: openssl version
	I1002 20:45:49.779239   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134492.pem && ln -fs /usr/share/ca-certificates/134492.pem /etc/ssl/certs/134492.pem"
	I1002 20:45:49.797764   47754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134492.pem
	I1002 20:45:49.806243   47754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 19:56 /usr/share/ca-certificates/134492.pem
	I1002 20:45:49.806309   47754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134492.pem
	I1002 20:45:49.818376   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134492.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:45:49.842263   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:45:49.882413   47754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:45:49.891396   47754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:48 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:45:49.891470   47754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:45:49.914491   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:45:49.967344   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13449.pem && ln -fs /usr/share/ca-certificates/13449.pem /etc/ssl/certs/13449.pem"
	I1002 20:45:50.011539   47754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13449.pem
	I1002 20:45:50.023651   47754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 19:56 /usr/share/ca-certificates/13449.pem
	I1002 20:45:50.023737   47754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13449.pem
	I1002 20:45:50.036383   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13449.pem /etc/ssl/certs/51391683.0"
	I1002 20:45:50.056705   47754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:45:50.071845   47754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:45:50.092711   47754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:45:50.118584   47754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:45:50.140717   47754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:45:50.176187   47754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:45:50.209109   47754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:45:50.233285   47754 kubeadm.go:400] StartCluster: {Name:pause-762562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-762562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:50.233432   47754 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:45:50.233524   47754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:45:50.334051   47754 cri.go:89] found id: "f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b"
	I1002 20:45:50.334075   47754 cri.go:89] found id: "5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f"
	I1002 20:45:50.334080   47754 cri.go:89] found id: "009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538"
	I1002 20:45:50.334084   47754 cri.go:89] found id: "afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934"
	I1002 20:45:50.334088   47754 cri.go:89] found id: "300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427"
	I1002 20:45:50.334099   47754 cri.go:89] found id: "3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2"
	I1002 20:45:50.334103   47754 cri.go:89] found id: "f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00"
	I1002 20:45:50.334107   47754 cri.go:89] found id: "ab87ccb20dd390ee4baa6b2fd84da917c1b052163a84e07245449fd4c55845cb"
	I1002 20:45:50.334112   47754 cri.go:89] found id: "4ea71438985a2992f2bf50cd478490a6d639389f0e34268746695f217d99f8a8"
	I1002 20:45:50.334119   47754 cri.go:89] found id: "e051a92c67661eb3bc5d520f9ab4ceb8c6b7f261a9235c0c3542faf922533b89"
	I1002 20:45:50.334124   47754 cri.go:89] found id: "69466eb2938e65a72f1595e1a878d72261b434b2fad75031ba0f5f18463ba4a3"
	I1002 20:45:50.334139   47754 cri.go:89] found id: "02ece0b6778b755e478b9dcc94630ca1d08c4759da97528343a46d0b78c36f2d"
	I1002 20:45:50.334145   47754 cri.go:89] found id: ""
	I1002 20:45:50.334201   47754 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-762562 -n pause-762562
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-762562 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-762562 logs -n 25: (1.584599822s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-446943 sudo systemctl cat docker --no-pager                                                                                                              │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /etc/docker/daemon.json                                                                                                                  │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo docker system info                                                                                                                           │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl cat cri-docker --no-pager                                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cri-dockerd --version                                                                                                                        │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl status containerd --all --full --no-pager                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl cat containerd --no-pager                                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /etc/containerd/config.toml                                                                                                              │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo containerd config dump                                                                                                                       │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl status crio --all --full --no-pager                                                                                                │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl cat crio --no-pager                                                                                                                │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo crio config                                                                                                                                  │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ delete  │ -p cilium-446943                                                                                                                                                   │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-787090 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-571399 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-571399    │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │                     │
	│ delete  │ -p running-upgrade-571399                                                                                                                                          │ running-upgrade-571399    │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │ 02 Oct 25 20:46 UTC │
	│ ssh     │ -p NoKubernetes-555034 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-555034       │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │                     │
	│ delete  │ -p NoKubernetes-555034                                                                                                                                             │ NoKubernetes-555034       │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │ 02 Oct 25 20:46 UTC │
	│ start   │ -p stopped-upgrade-485667 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-485667    │ jenkins │ v1.32.0 │ 02 Oct 25 20:46 UTC │                     │
	│ start   │ -p cert-expiration-491886 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-491886    │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:46:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:46:14.030906   51101 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:46:14.031236   51101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:46:14.031248   51101 out.go:374] Setting ErrFile to fd 2...
	I1002 20:46:14.031254   51101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:46:14.031584   51101 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:46:14.032310   51101 out.go:368] Setting JSON to false
	I1002 20:46:14.033707   51101 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5317,"bootTime":1759432657,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:46:14.033852   51101 start.go:140] virtualization: kvm guest
	I1002 20:46:14.035520   51101 out.go:179] * [cert-expiration-491886] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:46:14.036709   51101 notify.go:221] Checking for updates...
	I1002 20:46:14.036742   51101 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:46:14.037745   51101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:46:14.039186   51101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:46:14.040117   51101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:46:14.041077   51101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:46:14.042081   51101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:46:14.043406   51101 config.go:182] Loaded profile config "kubernetes-upgrade-787090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 20:46:14.043529   51101 config.go:182] Loaded profile config "pause-762562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:14.043606   51101 config.go:182] Loaded profile config "stopped-upgrade-485667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1002 20:46:14.043688   51101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:46:14.080757   51101 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 20:46:14.081943   51101 start.go:306] selected driver: kvm2
	I1002 20:46:14.081966   51101 start.go:936] validating driver "kvm2" against <nil>
	I1002 20:46:14.081975   51101 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:46:14.082823   51101 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:46:14.082903   51101 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:46:14.097308   51101 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:46:14.097333   51101 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:46:14.112940   51101 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:46:14.112979   51101 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:46:14.113258   51101 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:46:14.113275   51101 cni.go:84] Creating CNI manager for ""
	I1002 20:46:14.113316   51101 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:46:14.113320   51101 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:46:14.113361   51101 start.go:350] cluster config:
	{Name:cert-expiration-491886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-491886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:46:14.113442   51101 iso.go:125] acquiring lock: {Name:mkabc2fb4ac96edf87725f05149cf44e9a15d593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:46:14.115675   51101 out.go:179] * Starting "cert-expiration-491886" primary control-plane node in "cert-expiration-491886" cluster
	I1002 20:46:13.926866   51053 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1002 20:46:13.926912   51053 preload.go:148] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1002 20:46:13.926922   51053 cache.go:56] Caching tarball of preloaded images
	I1002 20:46:13.927038   51053 preload.go:174] Found /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:46:13.927048   51053 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1002 20:46:13.927213   51053 profile.go:148] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/stopped-upgrade-485667/config.json ...
	I1002 20:46:13.927237   51053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/stopped-upgrade-485667/config.json: {Name:mkc240ce72d1c877953eba0ee5e377766a38e76e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:13.927449   51053 start.go:365] acquiring machines lock for stopped-upgrade-485667: {Name:mk83006c688982612686a8dbdd0b9c4ecd5d338c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:46:13.909116   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:13.909759   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:13.909790   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:13.910113   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:13.910166   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:13.910095   50867 retry.go:31] will retry after 475.797743ms: waiting for domain to come up
	I1002 20:46:14.388079   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:14.388789   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:14.388815   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:14.389229   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:14.389269   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:14.389204   50867 retry.go:31] will retry after 703.24373ms: waiting for domain to come up
	I1002 20:46:15.093597   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:15.094174   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:15.094202   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:15.094525   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:15.094546   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:15.094494   50867 retry.go:31] will retry after 1.13908592s: waiting for domain to come up
	I1002 20:46:16.235759   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:16.236365   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:16.236394   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:16.236708   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:16.236748   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:16.236651   50867 retry.go:31] will retry after 1.432989784s: waiting for domain to come up
	I1002 20:46:17.671385   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:17.672078   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:17.672102   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:17.672351   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:17.672369   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:17.672340   50867 retry.go:31] will retry after 1.709787351s: waiting for domain to come up
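
The "will retry after ..." lines above come from a simple wait-with-growing-delay loop while libvirt reports no lease or ARP entry for the new domain. A minimal Go sketch of that pattern follows; it is illustrative only, and lookupDomainIP is a hypothetical placeholder, not a minikube or libmachine function.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupDomainIP stands in for querying DHCP leases and then ARP,
    // as the DBG lines above describe.
    func lookupDomainIP(domain string) (string, error) {
    	return "", errors.New("no interface addresses found")
    }

    // waitForDomainIP retries with an increasing, lightly jittered delay
    // until the domain reports an IP or the timeout expires.
    func waitForDomainIP(domain string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupDomainIP(domain); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2 // back off between attempts
    	}
    	return "", fmt.Errorf("domain %s did not report an IP within %v", domain, timeout)
    }

    func main() {
    	ip, err := waitForDomainIP("kubernetes-upgrade-787090", 30*time.Second)
    	fmt.Println(ip, err)
    }
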
	I1002 20:46:14.116690   51101 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:46:14.116754   51101 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:46:14.116762   51101 cache.go:59] Caching tarball of preloaded images
	I1002 20:46:14.116890   51101 preload.go:233] Found /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:46:14.116901   51101 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:46:14.117035   51101 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/cert-expiration-491886/config.json ...
	I1002 20:46:14.117057   51101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/cert-expiration-491886/config.json: {Name:mk2dbc2d3afa0d96241dceb5776cfe9b3406d0b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:14.117325   51101 start.go:361] acquireMachinesLock for cert-expiration-491886: {Name:mk83006c688982612686a8dbdd0b9c4ecd5d338c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:46:18.394094   47754 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b 5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f 009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538 afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934 300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427 3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2 f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 ab87ccb20dd390ee4baa6b2fd84da917c1b052163a84e07245449fd4c55845cb 4ea71438985a2992f2bf50cd478490a6d639389f0e34268746695f217d99f8a8 e051a92c67661eb3bc5d520f9ab4ceb8c6b7f261a9235c0c3542faf922533b89 69466eb2938e65a72f1595e1a878d72261b434b2fad75031ba0f5f18463ba4a3 02ece0b6778b755e478b9dcc94630ca1d08c4759da97528343a46d0b78c36f2d: (27.804380819s)
	W1002 20:46:18.394184   47754 kubeadm.go:648] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b 5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f 009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538 afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934 300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427 3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2 f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 ab87ccb20dd390ee4baa6b2fd84da917c1b052163a84e07245449fd4c55845cb 4ea71438985a2992f2bf50cd478490a6d639389f0e34268746695f217d99f8a8 e051a92c67661eb3bc5d520f9ab4ceb8c6b7f261a9235c0c3542faf922533b89 69466eb2938e65a72f1595e1a878d72261b434b2fad75031ba0f5f18463ba4a3 02ece0b6778b755e478b9dcc94630ca1d08c4759da97528343a46d0b78c36f2d: Process exited with status 1
	stdout:
	f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b
	5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f
	009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538
	afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934
	300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427
	3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2
	
	stderr:
	E1002 20:46:18.389345    3632 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00\": container with ID starting with f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 not found: ID does not exist" containerID="f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00"
	time="2025-10-02T20:46:18Z" level=fatal msg="stopping the container \"f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00\": rpc error: code = NotFound desc = could not find container \"f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00\": container with ID starting with f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 not found: ID does not exist"
	I1002 20:46:18.394255   47754 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:46:18.430877   47754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:46:18.445039   47754 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  2 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Oct  2 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Oct  2 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Oct  2 20:44 /etc/kubernetes/scheduler.conf
	
	I1002 20:46:18.445122   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:46:18.457407   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:46:18.469661   47754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:18.469740   47754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:46:18.483509   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:46:18.497176   47754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:18.497237   47754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:46:18.512510   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:46:18.526966   47754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:18.527028   47754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:46:18.540095   47754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:46:18.553313   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:18.611967   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:19.383898   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:19.384497   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:19.384526   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:19.384961   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:19.384992   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:19.384918   50867 retry.go:31] will retry after 1.893811672s: waiting for domain to come up
	I1002 20:46:21.280294   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:21.280994   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:21.281020   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:21.281320   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:21.281348   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:21.281299   50867 retry.go:31] will retry after 2.456569689s: waiting for domain to come up
	I1002 20:46:19.526029   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:19.828251   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:19.905577   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
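
The ssh_runner lines above run a sequence of real kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. Reduced to a plain exec loop, the sequence looks roughly like the sketch below; it assumes kubeadm is on PATH on the target host and is not how minikube itself structures the calls.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Phase names taken from the log lines above.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
    			return
    		}
    	}
    }
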
	I1002 20:46:20.004020   47754 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:46:20.004120   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:20.504473   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:21.004457   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:21.037649   47754 api_server.go:72] duration metric: took 1.033640349s to wait for apiserver process to appear ...
	I1002 20:46:21.037683   47754 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:46:21.037708   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:23.189380   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:46:23.189408   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:46:23.189423   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:23.244050   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:46:23.244094   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:46:23.538557   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:23.546990   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:46:23.547037   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:46:24.038537   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:24.044755   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:46:24.044779   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:46:24.538396   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:24.543706   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I1002 20:46:24.551760   47754 api_server.go:141] control plane version: v1.34.1
	I1002 20:46:24.551792   47754 api_server.go:131] duration metric: took 3.514101808s to wait for apiserver health ...
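
The healthz checks above go from 403 (anonymous request rejected before RBAC bootstrap) through 500 (bootstrap post-start hooks still failing) to 200. A minimal sketch of that polling pattern is below; the endpoint URL and the use of InsecureSkipVerify are assumptions for brevity, since minikube authenticates with the cluster's own certificates.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the timeout expires, logging non-OK responses along the way.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reports "ok"
    			}
    			// 403 and 500 are expected while bootstrap hooks are still running.
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
    	_ = waitForHealthz("https://192.168.50.218:8443/healthz", 2*time.Minute)
    }
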
	I1002 20:46:24.551846   47754 cni.go:84] Creating CNI manager for ""
	I1002 20:46:24.551854   47754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:46:24.553320   47754 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:46:24.554540   47754 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:46:24.570698   47754 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1002 20:46:24.599794   47754 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:46:24.606330   47754 system_pods.go:59] 6 kube-system pods found
	I1002 20:46:24.606378   47754 system_pods.go:61] "coredns-66bc5c9577-9pqwk" [37e86407-39b3-4b89-a6d2-943913357f8d] Running
	I1002 20:46:24.606394   47754 system_pods.go:61] "etcd-pause-762562" [d6ff7716-87e5-456d-8635-b8f9eb552c54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:46:24.606406   47754 system_pods.go:61] "kube-apiserver-pause-762562" [1d993398-0c38-4576-8850-b312521d95d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:46:24.606419   47754 system_pods.go:61] "kube-controller-manager-pause-762562" [4417516d-6470-4a03-96f6-85ce4fa96a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:46:24.606427   47754 system_pods.go:61] "kube-proxy-v544h" [45b79789-7110-4e85-8a30-4b58f010d5c0] Running
	I1002 20:46:24.606439   47754 system_pods.go:61] "kube-scheduler-pause-762562" [a3774909-2317-4f36-b15c-dcba06275c07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:46:24.606449   47754 system_pods.go:74] duration metric: took 6.624865ms to wait for pod list to return data ...
	I1002 20:46:24.606463   47754 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:46:24.612438   47754 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 20:46:24.612475   47754 node_conditions.go:123] node cpu capacity is 2
	I1002 20:46:24.612489   47754 node_conditions.go:105] duration metric: took 6.021325ms to run NodePressure ...
	I1002 20:46:24.612548   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:24.887522   47754 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 20:46:24.891798   47754 kubeadm.go:743] kubelet initialised
	I1002 20:46:24.891828   47754 kubeadm.go:744] duration metric: took 4.27595ms waiting for restarted kubelet to initialise ...
	I1002 20:46:24.891849   47754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:46:24.908561   47754 ops.go:34] apiserver oom_adj: -16
	I1002 20:46:24.908589   47754 kubeadm.go:601] duration metric: took 34.456812853s to restartPrimaryControlPlane
	I1002 20:46:24.908604   47754 kubeadm.go:402] duration metric: took 34.675329785s to StartCluster
	I1002 20:46:24.908628   47754 settings.go:142] acquiring lock: {Name:mk6a3acbc81c910cfbdc018b811be13c1e438c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:24.908734   47754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:46:24.909648   47754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/kubeconfig: {Name:mk0c75eb22a83f2f7ea4f564360059d4e6d21b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:24.909958   47754 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:24.910094   47754 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:46:24.910234   47754 config.go:182] Loaded profile config "pause-762562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:24.913854   47754 out.go:179] * Enabled addons: 
	I1002 20:46:24.913854   47754 out.go:179] * Verifying Kubernetes components...
	I1002 20:46:23.740768   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:23.741257   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:23.741283   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:23.741575   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:23.741599   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:23.741508   50867 retry.go:31] will retry after 2.567460998s: waiting for domain to come up
	I1002 20:46:26.310803   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:26.311304   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:26.311327   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:26.311680   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:26.311705   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:26.311609   50867 retry.go:31] will retry after 3.98742618s: waiting for domain to come up
	I1002 20:46:24.915251   47754 addons.go:514] duration metric: took 5.156828ms for enable addons: enabled=[]
	I1002 20:46:24.915324   47754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:25.132447   47754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:25.151207   47754 node_ready.go:35] waiting up to 6m0s for node "pause-762562" to be "Ready" ...
	I1002 20:46:25.155516   47754 node_ready.go:49] node "pause-762562" is "Ready"
	I1002 20:46:25.155554   47754 node_ready.go:38] duration metric: took 4.288982ms for node "pause-762562" to be "Ready" ...
	I1002 20:46:25.155571   47754 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:46:25.155635   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:25.183783   47754 api_server.go:72] duration metric: took 273.783101ms to wait for apiserver process to appear ...
	I1002 20:46:25.183820   47754 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:46:25.183841   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:25.188309   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I1002 20:46:25.189311   47754 api_server.go:141] control plane version: v1.34.1
	I1002 20:46:25.189343   47754 api_server.go:131] duration metric: took 5.514225ms to wait for apiserver health ...
	I1002 20:46:25.189355   47754 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:46:25.194635   47754 system_pods.go:59] 6 kube-system pods found
	I1002 20:46:25.194669   47754 system_pods.go:61] "coredns-66bc5c9577-9pqwk" [37e86407-39b3-4b89-a6d2-943913357f8d] Running
	I1002 20:46:25.194683   47754 system_pods.go:61] "etcd-pause-762562" [d6ff7716-87e5-456d-8635-b8f9eb552c54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:46:25.194693   47754 system_pods.go:61] "kube-apiserver-pause-762562" [1d993398-0c38-4576-8850-b312521d95d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:46:25.194703   47754 system_pods.go:61] "kube-controller-manager-pause-762562" [4417516d-6470-4a03-96f6-85ce4fa96a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:46:25.194709   47754 system_pods.go:61] "kube-proxy-v544h" [45b79789-7110-4e85-8a30-4b58f010d5c0] Running
	I1002 20:46:25.194719   47754 system_pods.go:61] "kube-scheduler-pause-762562" [a3774909-2317-4f36-b15c-dcba06275c07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:46:25.194741   47754 system_pods.go:74] duration metric: took 5.378726ms to wait for pod list to return data ...
	I1002 20:46:25.194756   47754 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:46:25.198211   47754 default_sa.go:45] found service account: "default"
	I1002 20:46:25.198244   47754 default_sa.go:55] duration metric: took 3.479823ms for default service account to be created ...
	I1002 20:46:25.198256   47754 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:46:25.201680   47754 system_pods.go:86] 6 kube-system pods found
	I1002 20:46:25.201710   47754 system_pods.go:89] "coredns-66bc5c9577-9pqwk" [37e86407-39b3-4b89-a6d2-943913357f8d] Running
	I1002 20:46:25.201735   47754 system_pods.go:89] "etcd-pause-762562" [d6ff7716-87e5-456d-8635-b8f9eb552c54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:46:25.201745   47754 system_pods.go:89] "kube-apiserver-pause-762562" [1d993398-0c38-4576-8850-b312521d95d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:46:25.201757   47754 system_pods.go:89] "kube-controller-manager-pause-762562" [4417516d-6470-4a03-96f6-85ce4fa96a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:46:25.201764   47754 system_pods.go:89] "kube-proxy-v544h" [45b79789-7110-4e85-8a30-4b58f010d5c0] Running
	I1002 20:46:25.201772   47754 system_pods.go:89] "kube-scheduler-pause-762562" [a3774909-2317-4f36-b15c-dcba06275c07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:46:25.201780   47754 system_pods.go:126] duration metric: took 3.517657ms to wait for k8s-apps to be running ...
	I1002 20:46:25.201788   47754 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:46:25.201831   47754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:46:25.220919   47754 system_svc.go:56] duration metric: took 19.119842ms WaitForService to wait for kubelet
	I1002 20:46:25.220954   47754 kubeadm.go:586] duration metric: took 310.958896ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:46:25.220973   47754 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:46:25.224539   47754 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 20:46:25.224574   47754 node_conditions.go:123] node cpu capacity is 2
	I1002 20:46:25.224591   47754 node_conditions.go:105] duration metric: took 3.610889ms to run NodePressure ...
	I1002 20:46:25.224608   47754 start.go:242] waiting for startup goroutines ...
	I1002 20:46:25.224618   47754 start.go:247] waiting for cluster config update ...
	I1002 20:46:25.224631   47754 start.go:256] writing updated cluster config ...
	I1002 20:46:25.225042   47754 ssh_runner.go:195] Run: rm -f paused
	I1002 20:46:25.231058   47754 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:46:25.231517   47754 kapi.go:59] client config for pause-762562: &rest.Config{Host:"https://192.168.50.218:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:25.234746   47754 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9pqwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:25.240558   47754 pod_ready.go:94] pod "coredns-66bc5c9577-9pqwk" is "Ready"
	I1002 20:46:25.240583   47754 pod_ready.go:86] duration metric: took 5.81326ms for pod "coredns-66bc5c9577-9pqwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:25.243280   47754 pod_ready.go:83] waiting for pod "etcd-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 20:46:27.249658   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	I1002 20:46:32.051674   51053 start.go:369] acquired machines lock for "stopped-upgrade-485667" in 18.124187556s
	I1002 20:46:32.051776   51053 start.go:93] Provisioning new machine with config: &{Name:stopped-upgrade-485667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopp
ed-upgrade-485667 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:32.051933   51053 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 20:46:32.055740   51053 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 20:46:32.055991   51053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:46:32.056065   51053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:46:32.072069   51053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I1002 20:46:32.072533   51053 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:46:32.073096   51053 main.go:141] libmachine: Using API Version  1
	I1002 20:46:32.073112   51053 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:46:32.073505   51053 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:46:32.073715   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .GetMachineName
	I1002 20:46:32.073860   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .DriverName
	I1002 20:46:32.074013   51053 start.go:159] libmachine.API.Create for "stopped-upgrade-485667" (driver="kvm2")
	I1002 20:46:32.074049   51053 client.go:168] LocalClient.Create starting
	I1002 20:46:32.074078   51053 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem
	I1002 20:46:32.074110   51053 main.go:141] libmachine: Decoding PEM data...
	I1002 20:46:32.074125   51053 main.go:141] libmachine: Parsing certificate...
	I1002 20:46:32.074171   51053 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem
	I1002 20:46:32.074191   51053 main.go:141] libmachine: Decoding PEM data...
	I1002 20:46:32.074199   51053 main.go:141] libmachine: Parsing certificate...
	I1002 20:46:32.074214   51053 main.go:141] libmachine: Running pre-create checks...
	I1002 20:46:32.074219   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .PreCreateCheck
	I1002 20:46:32.074647   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .GetConfigRaw
	I1002 20:46:32.075100   51053 main.go:141] libmachine: Creating machine...
	I1002 20:46:32.075109   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .Create
	I1002 20:46:32.075252   51053 main.go:141] libmachine: (stopped-upgrade-485667) creating domain...
	I1002 20:46:32.075266   51053 main.go:141] libmachine: (stopped-upgrade-485667) creating network...
	I1002 20:46:32.076857   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | found existing default network
	I1002 20:46:32.077041   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <network connections='2'>
	I1002 20:46:32.077058   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>default</name>
	I1002 20:46:32.077069   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 20:46:32.077084   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <forward mode='nat'>
	I1002 20:46:32.077095   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <nat>
	I1002 20:46:32.077104   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <port start='1024' end='65535'/>
	I1002 20:46:32.077111   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </nat>
	I1002 20:46:32.077116   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </forward>
	I1002 20:46:32.077122   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 20:46:32.077132   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 20:46:32.077139   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 20:46:32.077147   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <dhcp>
	I1002 20:46:32.077165   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 20:46:32.077182   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </dhcp>
	I1002 20:46:32.077197   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </ip>
	I1002 20:46:32.077205   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </network>
	I1002 20:46:32.077217   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
	I1002 20:46:32.077980   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:32.077836   51272 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123900}
	I1002 20:46:32.078065   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | defining private network:
	I1002 20:46:32.078091   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
	I1002 20:46:32.078101   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <network>
	I1002 20:46:32.078109   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>mk-stopped-upgrade-485667</name>
	I1002 20:46:32.078118   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <dns enable='no'/>
	I1002 20:46:32.078126   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 20:46:32.078136   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <dhcp>
	I1002 20:46:32.078143   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 20:46:32.078150   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </dhcp>
	I1002 20:46:32.078157   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </ip>
	I1002 20:46:32.078180   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </network>
	I1002 20:46:32.078192   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
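
The XML logged above defines the per-profile private network (name, disabled DNS, gateway address, DHCP range). The sketch below shows how such a definition could be rendered from a template; the struct fields are assumptions for illustration, and the rendered XML would then be handed to the libvirt API (or `virsh net-define` plus `virsh net-start`) rather than printed, as the KVM driver does internally.

    package main

    import (
    	"os"
    	"text/template"
    )

    const networkTmpl = `<network>
      <name>mk-{{.Profile}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='255.255.255.0'>
        <dhcp>
          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
        </dhcp>
      </ip>
    </network>
    `

    // netParams holds the values substituted into the network definition.
    type netParams struct {
    	Profile   string
    	Gateway   string
    	ClientMin string
    	ClientMax string
    }

    func main() {
    	p := netParams{
    		Profile:   "stopped-upgrade-485667",
    		Gateway:   "192.168.39.1",
    		ClientMin: "192.168.39.2",
    		ClientMax: "192.168.39.253",
    	}
    	tmpl := template.Must(template.New("net").Parse(networkTmpl))
    	if err := tmpl.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
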
	I1002 20:46:32.084108   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | creating private network mk-stopped-upgrade-485667 192.168.39.0/24...
	I1002 20:46:32.166658   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | private network mk-stopped-upgrade-485667 192.168.39.0/24 created
	I1002 20:46:32.166965   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <network>
	I1002 20:46:32.166975   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>mk-stopped-upgrade-485667</name>
	I1002 20:46:32.166987   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <uuid>21a721e2-d788-49c0-88c5-b81f0f9ffff9</uuid>
	I1002 20:46:32.167006   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting up store path in /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667 ...
	I1002 20:46:32.167015   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I1002 20:46:32.167027   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <mac address='52:54:00:00:70:d3'/>
	I1002 20:46:32.167047   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <dns enable='no'/>
	I1002 20:46:32.167063   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 20:46:32.167069   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <dhcp>
	I1002 20:46:32.167078   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 20:46:32.167083   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </dhcp>
	I1002 20:46:32.167102   51053 main.go:141] libmachine: (stopped-upgrade-485667) building disk image from file:///home/jenkins/minikube-integration/21683-9524/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1002 20:46:32.167130   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </ip>
	I1002 20:46:32.167163   51053 main.go:141] libmachine: (stopped-upgrade-485667) Downloading /home/jenkins/minikube-integration/21683-9524/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21683-9524/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1002 20:46:32.167183   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </network>
	I1002 20:46:32.167198   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
	I1002 20:46:32.167208   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:32.166963   51272 common.go:147] Making disk image using store path: /home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:46:32.377703   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:32.377548   51272 common.go:154] Creating ssh key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/id_rsa...
	I1002 20:46:33.077050   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:33.076913   51272 common.go:160] Creating raw disk image: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/stopped-upgrade-485667.rawdisk...
	I1002 20:46:33.077080   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | Writing magic tar header
	I1002 20:46:33.077095   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | Writing SSH key tar header
	I1002 20:46:33.077103   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:33.077043   51272 common.go:174] Fixing permissions on /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667 ...
	I1002 20:46:33.077237   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667
	I1002 20:46:33.077262   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667 (perms=drwx------)
	I1002 20:46:33.077283   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube/machines
	I1002 20:46:33.077298   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:46:33.077309   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524
	I1002 20:46:33.077320   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 20:46:33.077328   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins
	I1002 20:46:33.077341   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube/machines (perms=drwxr-xr-x)
	I1002 20:46:33.077351   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home
	I1002 20:46:33.077371   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | skipping /home - not owner
	I1002 20:46:33.077383   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube (perms=drwxr-xr-x)
	I1002 20:46:33.077396   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524 (perms=drwxrwxr-x)
	I1002 20:46:33.077406   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 20:46:33.077427   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 20:46:33.077438   51053 main.go:141] libmachine: (stopped-upgrade-485667) defining domain...
	I1002 20:46:33.078675   51053 main.go:141] libmachine: (stopped-upgrade-485667) defining domain using XML: 
	I1002 20:46:33.078712   51053 main.go:141] libmachine: (stopped-upgrade-485667) <domain type='kvm'>
	I1002 20:46:33.078740   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <name>stopped-upgrade-485667</name>
	I1002 20:46:33.078755   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <memory unit='MiB'>3072</memory>
	I1002 20:46:33.078764   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <vcpu>2</vcpu>
	I1002 20:46:33.078771   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <features>
	I1002 20:46:33.078779   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <acpi/>
	I1002 20:46:33.078794   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <apic/>
	I1002 20:46:33.078804   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <pae/>
	I1002 20:46:33.078813   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </features>
	I1002 20:46:33.078824   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <cpu mode='host-passthrough'>
	I1002 20:46:33.078840   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </cpu>
	I1002 20:46:33.078850   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <os>
	I1002 20:46:33.078859   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <type>hvm</type>
	I1002 20:46:33.078870   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <boot dev='cdrom'/>
	I1002 20:46:33.078878   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <boot dev='hd'/>
	I1002 20:46:33.078887   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <bootmenu enable='no'/>
	I1002 20:46:33.078893   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </os>
	I1002 20:46:33.078902   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <devices>
	I1002 20:46:33.078915   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <disk type='file' device='cdrom'>
	I1002 20:46:33.078928   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/boot2docker.iso'/>
	I1002 20:46:33.078937   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target dev='hdc' bus='scsi'/>
	I1002 20:46:33.078946   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <readonly/>
	I1002 20:46:33.078955   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </disk>
	I1002 20:46:33.078964   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <disk type='file' device='disk'>
	I1002 20:46:33.079014   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 20:46:33.079044   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/stopped-upgrade-485667.rawdisk'/>
	I1002 20:46:33.079054   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target dev='hda' bus='virtio'/>
	I1002 20:46:33.079061   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </disk>
	I1002 20:46:33.079070   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <interface type='network'>
	I1002 20:46:33.079079   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source network='mk-stopped-upgrade-485667'/>
	I1002 20:46:33.079092   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <model type='virtio'/>
	I1002 20:46:33.079099   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </interface>
	I1002 20:46:33.079122   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <interface type='network'>
	I1002 20:46:33.079136   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source network='default'/>
	I1002 20:46:33.079142   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <model type='virtio'/>
	I1002 20:46:33.079157   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </interface>
	I1002 20:46:33.079163   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <serial type='pty'>
	I1002 20:46:33.079170   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target port='0'/>
	I1002 20:46:33.079176   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </serial>
	I1002 20:46:33.079181   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <console type='pty'>
	I1002 20:46:33.079187   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target type='serial' port='0'/>
	I1002 20:46:33.079191   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </console>
	I1002 20:46:33.079200   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <rng model='virtio'>
	I1002 20:46:33.079206   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <backend model='random'>/dev/random</backend>
	I1002 20:46:33.079211   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </rng>
	I1002 20:46:33.079215   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </devices>
	I1002 20:46:33.079220   51053 main.go:141] libmachine: (stopped-upgrade-485667) </domain>
	I1002 20:46:33.079224   51053 main.go:141] libmachine: (stopped-upgrade-485667) 
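The block above is the domain XML the driver generates before handing it to libvirt. If that XML were saved to a file, the same definition could be loaded and inspected by hand with virsh (standard libvirt CLI; the driver itself talks to libvirt through its API, so this is only an equivalent sketch):

    virsh define stopped-upgrade-485667.xml   # persistently define the KVM domain from the XML above
    virsh dumpxml stopped-upgrade-485667      # show the definition after libvirt fills in defaults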
	I1002 20:46:33.083800   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:03:17:38 in network default
	I1002 20:46:33.084615   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:33.084666   51053 main.go:141] libmachine: (stopped-upgrade-485667) starting domain...
	I1002 20:46:33.084686   51053 main.go:141] libmachine: (stopped-upgrade-485667) ensuring networks are active...
	I1002 20:46:33.085689   51053 main.go:141] libmachine: (stopped-upgrade-485667) Ensuring network default is active
	I1002 20:46:33.086060   51053 main.go:141] libmachine: (stopped-upgrade-485667) Ensuring network mk-stopped-upgrade-485667 is active
	I1002 20:46:33.086887   51053 main.go:141] libmachine: (stopped-upgrade-485667) getting domain XML...
	I1002 20:46:33.088040   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | starting domain XML:
	I1002 20:46:33.088056   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <domain type='kvm'>
	I1002 20:46:33.088067   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>stopped-upgrade-485667</name>
	I1002 20:46:33.088074   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <uuid>83e30974-a6f6-45a8-b9b1-32a27433eab3</uuid>
	I1002 20:46:33.088083   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <memory unit='KiB'>3145728</memory>
	I1002 20:46:33.088090   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1002 20:46:33.088099   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 20:46:33.088105   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <os>
	I1002 20:46:33.088120   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 20:46:33.088126   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <boot dev='cdrom'/>
	I1002 20:46:33.088136   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <boot dev='hd'/>
	I1002 20:46:33.088144   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <bootmenu enable='no'/>
	I1002 20:46:33.088153   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </os>
	I1002 20:46:33.088161   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <features>
	I1002 20:46:33.088190   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <acpi/>
	I1002 20:46:33.088206   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <apic/>
	I1002 20:46:33.088216   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <pae/>
	I1002 20:46:33.088231   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </features>
	I1002 20:46:33.088253   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 20:46:33.088261   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <clock offset='utc'/>
	I1002 20:46:33.088272   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 20:46:33.088286   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <on_reboot>restart</on_reboot>
	I1002 20:46:33.088298   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <on_crash>destroy</on_crash>
	I1002 20:46:33.088311   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <devices>
	I1002 20:46:33.088323   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 20:46:33.088332   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <disk type='file' device='cdrom'>
	I1002 20:46:33.088345   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <driver name='qemu' type='raw'/>
	I1002 20:46:33.088364   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/boot2docker.iso'/>
	I1002 20:46:33.088375   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 20:46:33.088384   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <readonly/>
	I1002 20:46:33.088396   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 20:46:33.088404   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </disk>
	I1002 20:46:33.088415   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <disk type='file' device='disk'>
	I1002 20:46:33.088430   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 20:46:33.088456   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/stopped-upgrade-485667.rawdisk'/>
	I1002 20:46:33.088465   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target dev='hda' bus='virtio'/>
	I1002 20:46:33.088484   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 20:46:33.088491   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </disk>
	I1002 20:46:33.088518   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 20:46:33.088535   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 20:46:33.088546   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </controller>
	I1002 20:46:33.088553   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 20:46:33.088562   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 20:46:33.088576   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 20:46:33.088591   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </controller>
	I1002 20:46:33.088606   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <interface type='network'>
	I1002 20:46:33.088617   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <mac address='52:54:00:31:36:40'/>
	I1002 20:46:33.088625   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source network='mk-stopped-upgrade-485667'/>
	I1002 20:46:33.088638   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <model type='virtio'/>
	I1002 20:46:33.088646   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 20:46:33.088651   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </interface>
	I1002 20:46:33.088656   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <interface type='network'>
	I1002 20:46:33.088662   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <mac address='52:54:00:03:17:38'/>
	I1002 20:46:33.088667   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source network='default'/>
	I1002 20:46:33.088691   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <model type='virtio'/>
	I1002 20:46:33.088703   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 20:46:33.088709   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </interface>
	I1002 20:46:33.088714   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <serial type='pty'>
	I1002 20:46:33.088733   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target type='isa-serial' port='0'>
	I1002 20:46:33.088742   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |         <model name='isa-serial'/>
	I1002 20:46:33.088765   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       </target>
	I1002 20:46:33.088780   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </serial>
	I1002 20:46:33.088791   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <console type='pty'>
	I1002 20:46:33.088811   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target type='serial' port='0'/>
	I1002 20:46:33.088820   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </console>
	I1002 20:46:33.088829   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <input type='mouse' bus='ps2'/>
	I1002 20:46:33.088839   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 20:46:33.088852   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <audio id='1' type='none'/>
	I1002 20:46:33.088863   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <memballoon model='virtio'>
	I1002 20:46:33.088874   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 20:46:33.088883   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </memballoon>
	I1002 20:46:33.088892   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <rng model='virtio'>
	I1002 20:46:33.088903   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <backend model='random'>/dev/random</backend>
	I1002 20:46:33.088914   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 20:46:33.088922   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </rng>
	I1002 20:46:33.088933   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </devices>
	I1002 20:46:33.088940   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </domain>
	I1002 20:46:33.088954   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
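Compared with the XML submitted at definition time, the dump above shows what libvirt filled in on its own: the domain UUID, the qemu emulator path, PCI/drive addresses, the USB/SCSI/PCI controllers, PS/2 inputs and the memballoon device. Starting the domain and the two networks it attaches to could be done manually along these lines (domain and network names from the log; again only a virsh-level approximation of the API calls the driver makes):

    virsh net-start default                        # host NAT network (usually already active)
    virsh net-start mk-stopped-upgrade-485667      # the per-profile private network
    virsh start stopped-upgrade-485667             # boot the VM; its console is the pty serial device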
	I1002 20:46:30.303932   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.304688   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) found domain IP: 192.168.61.2
	I1002 20:46:30.304739   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has current primary IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.304749   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) reserving static IP address...
	I1002 20:46:30.305112   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-787090", mac: "52:54:00:13:32:89", ip: "192.168.61.2"} in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.505181   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) reserved static IP address 192.168.61.2 for domain kubernetes-upgrade-787090
	I1002 20:46:30.505216   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | Getting to WaitForSSH function...
	I1002 20:46:30.505277   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) waiting for SSH...
	I1002 20:46:30.508031   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.508401   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.508432   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.508571   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | Using SSH client type: external
	I1002 20:46:30.508613   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa (-rw-------)
	I1002 20:46:30.508660   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 20:46:30.508678   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | About to run SSH command:
	I1002 20:46:30.508699   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | exit 0
	I1002 20:46:30.639766   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | SSH cmd err, output: <nil>: 
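The WaitForSSH step builds an external ssh invocation from the flags logged above and treats the machine as reachable once a trivial `exit 0` succeeds. Reassembled as a single command line (key path and address copied from the log), it is roughly:

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa \
        -p 22 docker@192.168.61.2 'exit 0'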
	I1002 20:46:30.640169   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) domain creation complete
	I1002 20:46:30.640480   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetConfigRaw
	I1002 20:46:30.641058   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:30.641262   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:30.641438   50605 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 20:46:30.641454   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetState
	I1002 20:46:30.642915   50605 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 20:46:30.642929   50605 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 20:46:30.642935   50605 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 20:46:30.642940   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.645307   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.645663   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.645686   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.645856   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.646045   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.646210   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.646347   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.646498   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.646739   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.646757   50605 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 20:46:30.745531   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:46:30.745561   50605 main.go:141] libmachine: Detecting the provisioner...
	I1002 20:46:30.745571   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.749173   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.749554   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.749593   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.749736   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.749927   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.750109   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.750300   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.750464   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.750679   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.750693   50605 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 20:46:30.852488   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 20:46:30.852556   50605 main.go:141] libmachine: found compatible host: buildroot
	I1002 20:46:30.852563   50605 main.go:141] libmachine: Provisioning with buildroot...
	I1002 20:46:30.852579   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetMachineName
	I1002 20:46:30.852881   50605 buildroot.go:166] provisioning hostname "kubernetes-upgrade-787090"
	I1002 20:46:30.852908   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetMachineName
	I1002 20:46:30.853115   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.856458   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.856892   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.856921   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.857260   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.857475   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.857661   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.857842   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.858034   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.858247   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.858260   50605 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-787090 && echo "kubernetes-upgrade-787090" | sudo tee /etc/hostname
	I1002 20:46:30.978376   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-787090
	
	I1002 20:46:30.978406   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.981623   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.982065   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.982101   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.982291   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.982596   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.982813   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.982973   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.983146   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.983370   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.983388   50605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-787090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-787090/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-787090' | sudo tee -a /etc/hosts; 
				fi
			fi
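The snippet above keeps /etc/hosts idempotent: `grep -xq` matches whole lines only, so the hostname is added at most once, either by rewriting an existing 127.0.1.1 entry or by appending a new one. The same commands, restated with comments for readability:

    # only touch /etc/hosts if no line already ends with the new hostname
    if ! grep -xq '.*\skubernetes-upgrade-787090' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        # an existing 127.0.1.1 entry: rewrite it in place
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-787090/g' /etc/hosts
      else
        # no 127.0.1.1 entry yet: append one
        echo '127.0.1.1 kubernetes-upgrade-787090' | sudo tee -a /etc/hosts
      fi
    fi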
	I1002 20:46:31.096257   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:46:31.096284   50605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9524/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9524/.minikube}
	I1002 20:46:31.096308   50605 buildroot.go:174] setting up certificates
	I1002 20:46:31.096319   50605 provision.go:84] configureAuth start
	I1002 20:46:31.096327   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetMachineName
	I1002 20:46:31.096638   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetIP
	I1002 20:46:31.099410   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.099816   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.099845   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.100034   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.103598   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.103973   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.104004   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.104216   50605 provision.go:143] copyHostCerts
	I1002 20:46:31.104286   50605 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem, removing ...
	I1002 20:46:31.104303   50605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem
	I1002 20:46:31.104359   50605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem (1082 bytes)
	I1002 20:46:31.104460   50605 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem, removing ...
	I1002 20:46:31.104468   50605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem
	I1002 20:46:31.104490   50605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem (1123 bytes)
	I1002 20:46:31.104556   50605 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem, removing ...
	I1002 20:46:31.104563   50605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem
	I1002 20:46:31.104581   50605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem (1679 bytes)
	I1002 20:46:31.104635   50605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-787090 san=[127.0.0.1 192.168.61.2 kubernetes-upgrade-787090 localhost minikube]
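configureAuth generates the machine's server certificate in Go, signing it with the CA under .minikube/certs and embedding the SANs listed above. For illustration only, a hypothetical openssl equivalent (file names assumed to match the paths in the log) would be:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.kubernetes-upgrade-787090" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.61.2,DNS:kubernetes-upgrade-787090,DNS:localhost,DNS:minikube')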
	I1002 20:46:31.356494   50605 provision.go:177] copyRemoteCerts
	I1002 20:46:31.356550   50605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:46:31.356574   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.359489   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.359894   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.359926   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.360097   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.360286   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.360418   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.360571   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:31.445836   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:46:31.478892   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 20:46:31.512757   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:46:31.545533   50605 provision.go:87] duration metric: took 449.202242ms to configureAuth
	I1002 20:46:31.545561   50605 buildroot.go:189] setting minikube options for container-runtime
	I1002 20:46:31.545767   50605 config.go:182] Loaded profile config "kubernetes-upgrade-787090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 20:46:31.545844   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.549071   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.549523   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.549556   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.549799   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.550020   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.550209   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.550331   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.550524   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:31.550753   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:31.550774   50605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:46:31.797669   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:46:31.797697   50605 main.go:141] libmachine: Checking connection to Docker...
	I1002 20:46:31.797707   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetURL
	I1002 20:46:31.799126   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | using libvirt version 8000000
	I1002 20:46:31.802033   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.802372   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.802402   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.802582   50605 main.go:141] libmachine: Docker is up and running!
	I1002 20:46:31.802597   50605 main.go:141] libmachine: Reticulating splines...
	I1002 20:46:31.802604   50605 client.go:171] duration metric: took 21.34466823s to LocalClient.Create
	I1002 20:46:31.802626   50605 start.go:168] duration metric: took 21.344742296s to libmachine.API.Create "kubernetes-upgrade-787090"
	I1002 20:46:31.802636   50605 start.go:294] postStartSetup for "kubernetes-upgrade-787090" (driver="kvm2")
	I1002 20:46:31.802644   50605 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:46:31.802668   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:31.802899   50605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:46:31.802921   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.805293   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.805571   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.805620   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.805761   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.805935   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.806104   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.806270   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:31.889594   50605 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:46:31.894916   50605 info.go:137] Remote host: Buildroot 2025.02
	I1002 20:46:31.894943   50605 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/addons for local assets ...
	I1002 20:46:31.895022   50605 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/files for local assets ...
	I1002 20:46:31.895109   50605 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem -> 134492.pem in /etc/ssl/certs
	I1002 20:46:31.895231   50605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:46:31.907489   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem --> /etc/ssl/certs/134492.pem (1708 bytes)
	I1002 20:46:31.938717   50605 start.go:297] duration metric: took 136.067235ms for postStartSetup
	I1002 20:46:31.938796   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetConfigRaw
	I1002 20:46:31.939513   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetIP
	I1002 20:46:31.942363   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.942746   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.942781   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.943000   50605 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/config.json ...
	I1002 20:46:31.943215   50605 start.go:129] duration metric: took 21.507355283s to createHost
	I1002 20:46:31.943238   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.945565   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.945938   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.945974   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.946101   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.946305   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.946490   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.946643   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.946838   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:31.947039   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:31.947049   50605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 20:46:32.051461   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759437992.017112160
	
	I1002 20:46:32.051486   50605 fix.go:217] guest clock: 1759437992.017112160
	I1002 20:46:32.051497   50605 fix.go:230] Guest: 2025-10-02 20:46:32.01711216 +0000 UTC Remote: 2025-10-02 20:46:31.943227242 +0000 UTC m=+38.257701484 (delta=73.884918ms)
	I1002 20:46:32.051545   50605 fix.go:201] guest clock delta is within tolerance: 73.884918ms
	I1002 20:46:32.051551   50605 start.go:84] releasing machines lock for "kubernetes-upgrade-787090", held for 21.615891533s
	I1002 20:46:32.051581   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.051884   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetIP
	I1002 20:46:32.055232   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.055670   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:32.055702   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.055930   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.056453   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.056659   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.056783   50605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:46:32.056828   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:32.056899   50605 ssh_runner.go:195] Run: cat /version.json
	I1002 20:46:32.056928   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:32.060329   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.060420   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.060779   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:32.060815   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.060847   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:32.060864   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.061016   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:32.061158   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:32.061261   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:32.061330   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:32.061402   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:32.061473   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:32.061537   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:32.061580   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:32.185063   50605 ssh_runner.go:195] Run: systemctl --version
	I1002 20:46:32.192258   50605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:46:32.373188   50605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:46:32.381268   50605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:46:32.381340   50605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:46:32.404890   50605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:46:32.404919   50605 start.go:496] detecting cgroup driver to use...
	I1002 20:46:32.404982   50605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:46:32.425834   50605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:46:32.445335   50605 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:46:32.445450   50605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:46:32.465844   50605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:46:32.487313   50605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:46:32.655116   50605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:46:32.862992   50605 docker.go:234] disabling docker service ...
	I1002 20:46:32.863056   50605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:46:32.884305   50605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:46:32.901142   50605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:46:33.070505   50605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:46:33.231851   50605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:46:33.250500   50605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:46:33.275416   50605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 20:46:33.275479   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.289196   50605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:46:33.289300   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.303326   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.317345   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.331450   50605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:46:33.346446   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.360427   50605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.384390   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
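The sequence of `sed -i` commands above rewrites the CRI-O drop-in so it uses the requested pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. A compact Go sketch of the same rewrite is below; it is an illustrative approximation (not minikube's code), keeps only the pause-image and cgroup edits, and assumes the drop-in path shown in the log.

```go
// patch_crio_conf.go - illustrative sketch of the sed edits above on the CRI-O
// drop-in config. Needs root to write the file back.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	conf := string(data)
	// Replace any pause_image line (commented or not) with the requested image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Drop existing conmon_cgroup lines, then re-add it after cgroup_manager,
	// mirroring the delete-then-append pattern of the logged sed commands.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
}
```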
	I1002 20:46:33.399181   50605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:46:33.412373   50605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 20:46:33.412450   50605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 20:46:33.434962   50605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
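Here the sysctl probe fails because `br_netfilter` is not loaded yet, so the module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A small Go sketch of that kernel prep step (illustrative only; requires root, and the paths mirror the log):

```go
// kernel_net_prep.go - illustrative sketch: load br_netfilter if the bridge
// netfilter sysctl is missing, then enable IPv4 forwarding, matching the
// modprobe and `echo 1 > /proc/sys/net/ipv4/ip_forward` commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Sysctl not present yet; load the module that provides it.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v (%s)\n", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
}
```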
	I1002 20:46:33.448505   50605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:33.599553   50605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:46:33.721806   50605 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:46:33.721887   50605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:46:33.727813   50605 start.go:564] Will wait 60s for crictl version
	I1002 20:46:33.727884   50605 ssh_runner.go:195] Run: which crictl
	I1002 20:46:33.732522   50605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 20:46:33.787517   50605 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 20:46:33.787622   50605 ssh_runner.go:195] Run: crio --version
	I1002 20:46:33.821256   50605 ssh_runner.go:195] Run: crio --version
	I1002 20:46:33.856834   50605 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
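After restarting CRI-O, the log waits up to 60s for the runtime socket and then confirms the runtime answers `crictl version` (cri-o 1.29.1 here). A minimal Go sketch of that readiness check is below; the socket path and 60s budget come from the log, while the 500ms polling interval is an assumption.

```go
// wait_for_crio.go - illustrative sketch of the "Will wait 60s for socket path"
// step: poll for the CRI-O socket, then confirm the runtime responds to crictl.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break // socket exists; the runtime is (or is about to be) serving
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
			os.Exit(1)
		}
		time.Sleep(500 * time.Millisecond)
	}
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "crictl version: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.29.1
}
```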
	W1002 20:46:29.250859   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	W1002 20:46:31.751256   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	W1002 20:46:33.755241   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	W1002 20:46:36.251127   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	I1002 20:46:36.749958   47754 pod_ready.go:94] pod "etcd-pause-762562" is "Ready"
	I1002 20:46:36.749985   47754 pod_ready.go:86] duration metric: took 11.506680188s for pod "etcd-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.753552   47754 pod_ready.go:83] waiting for pod "kube-apiserver-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.759584   47754 pod_ready.go:94] pod "kube-apiserver-pause-762562" is "Ready"
	I1002 20:46:36.759619   47754 pod_ready.go:86] duration metric: took 6.033444ms for pod "kube-apiserver-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.763548   47754 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.770155   47754 pod_ready.go:94] pod "kube-controller-manager-pause-762562" is "Ready"
	I1002 20:46:36.770178   47754 pod_ready.go:86] duration metric: took 6.599923ms for pod "kube-controller-manager-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.772741   47754 pod_ready.go:83] waiting for pod "kube-proxy-v544h" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.947495   47754 pod_ready.go:94] pod "kube-proxy-v544h" is "Ready"
	I1002 20:46:36.947533   47754 pod_ready.go:86] duration metric: took 174.763745ms for pod "kube-proxy-v544h" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:37.147940   47754 pod_ready.go:83] waiting for pod "kube-scheduler-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:37.548851   47754 pod_ready.go:94] pod "kube-scheduler-pause-762562" is "Ready"
	I1002 20:46:37.548884   47754 pod_ready.go:86] duration metric: took 400.907691ms for pod "kube-scheduler-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:37.548898   47754 pod_ready.go:40] duration metric: took 12.317805308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:46:37.600605   47754 start.go:627] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 20:46:37.605714   47754 out.go:179] * Done! kubectl is now configured to use "pause-762562" cluster and "default" namespace by default
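The pod_ready waits above poll each kube-system control-plane pod until its Ready condition is True before declaring the cluster usable. The Go sketch below reproduces that check with plain kubectl polling; it is an assumed illustration (not the pod_ready helper itself), the context and pod names are taken from the log, and the 4-minute timeout and 2s interval are assumptions.

```go
// wait_pods_ready.go - illustrative sketch of the pod_ready waits: poll pods
// via kubectl until each reports condition Ready=True.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition status is "True".
func podReady(context, namespace, name string) bool {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", name, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	pods := []string{
		"etcd-pause-762562",
		"kube-apiserver-pause-762562",
		"kube-controller-manager-pause-762562",
		"kube-proxy-v544h",
		"kube-scheduler-pause-762562",
	}
	deadline := time.Now().Add(4 * time.Minute)
	for _, p := range pods {
		for !podReady("pause-762562", "kube-system", p) {
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for", p)
				os.Exit(1)
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Printf("pod %q is Ready\n", p)
	}
}
```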
	
	
	==> CRI-O <==
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.472424102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437998472392664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97e29545-c799-40be-99be-4ccf2e4d1143 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.473083167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26ceaf4c-f0b4-4b1b-a28b-94f693b8e29d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.473254550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26ceaf4c-f0b4-4b1b-a28b-94f693b8e29d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.473646411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26ceaf4c-f0b4-4b1b-a28b-94f693b8e29d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.528410506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad8e666d-fdef-4aaa-ae1f-f8cedbb7ab69 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.528491013Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad8e666d-fdef-4aaa-ae1f-f8cedbb7ab69 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.530741271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33c0190f-aeaa-4667-a6c7-adb1f334adcd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.532001628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437998531977587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33c0190f-aeaa-4667-a6c7-adb1f334adcd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.533532876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff4f81a9-37d4-4669-a88f-e99298495c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.533642718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff4f81a9-37d4-4669-a88f-e99298495c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.533886872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff4f81a9-37d4-4669-a88f-e99298495c0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.588830100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f3e3140-22e1-4034-a2de-40bc55cf2596 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.588957046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f3e3140-22e1-4034-a2de-40bc55cf2596 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.590928308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c405422-756d-40ac-a93c-60082897f4a9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.591697396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437998591665671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c405422-756d-40ac-a93c-60082897f4a9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.592572667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2118961-8084-4025-b685-0e2b2a785e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.592820584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2118961-8084-4025-b685-0e2b2a785e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.593505641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2118961-8084-4025-b685-0e2b2a785e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.649998958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=183ef5ed-9281-41cb-ab3c-9e091048d5f7 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.650086880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=183ef5ed-9281-41cb-ab3c-9e091048d5f7 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.651895921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d23047b4-6fad-4583-8944-031c8745e8d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.652406562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759437998652381620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d23047b4-6fad-4583-8944-031c8745e8d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.653044979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=408f9972-36a1-4364-8d47-954ecf822e47 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.653101081Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=408f9972-36a1-4364-8d47-954ecf822e47 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:38 pause-762562 crio[2816]: time="2025-10-02 20:46:38.653431369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=408f9972-36a1-4364-8d47-954ecf822e47 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f35539b42c1c2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   18 seconds ago      Running             kube-apiserver            2                   fabda5ffa77a2       kube-apiserver-pause-762562
	d30719ae7ea42       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 seconds ago      Running             etcd                      2                   ef8699ca9248b       etcd-pause-762562
	1db5c4e94a2ca       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   18 seconds ago      Running             kube-controller-manager   2                   79084517cb61a       kube-controller-manager-pause-762562
	415b84c3bfa0e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago      Running             coredns                   2                   71ac4f78242c2       coredns-66bc5c9577-9pqwk
	1e5f90d35eacc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                2                   9228cd5ff0c1e       kube-proxy-v544h
	acc505f66733b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   27 seconds ago      Running             kube-scheduler            2                   a1519324ed9a7       kube-scheduler-pause-762562
	f9a8dac79d64a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   48 seconds ago      Exited              coredns                   1                   71ac4f78242c2       coredns-66bc5c9577-9pqwk
	5997f05643c28       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   49 seconds ago      Exited              kube-proxy                1                   9228cd5ff0c1e       kube-proxy-v544h
	009491ddb7bb3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   50 seconds ago      Exited              kube-scheduler            1                   a1519324ed9a7       kube-scheduler-pause-762562
	afcb036ebb5bf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   50 seconds ago      Exited              etcd                      1                   ef8699ca9248b       etcd-pause-762562
	300685cf73111       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   50 seconds ago      Exited              kube-apiserver            1                   fabda5ffa77a2       kube-apiserver-pause-762562
	3ba8febf0b679       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   50 seconds ago      Exited              kube-controller-manager   1                   79084517cb61a       kube-controller-manager-pause-762562
	
	
	==> coredns [415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55967 - 34154 "HINFO IN 5038972822532882445.8099331229069857526. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026342327s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45495 - 21436 "HINFO IN 7615697003491596128.4513909049238198017. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032593539s
	
	
	==> describe nodes <==
	Name:               pause-762562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-762562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=pause-762562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_44_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-762562
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.218
	  Hostname:    pause-762562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9e3bed246a94647aa8538c174b56581
	  System UUID:                c9e3bed2-46a9-4647-aa85-38c174b56581
	  Boot ID:                    9d0457cf-5e5d-4040-abdd-4137d21878a4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9pqwk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m6s
	  kube-system                 etcd-pause-762562                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m13s
	  kube-system                 kube-apiserver-pause-762562             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-pause-762562    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-v544h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-pause-762562             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m4s               kube-proxy       
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 45s                kube-proxy       
	  Normal  Starting                 2m11s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m11s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m11s              kubelet          Node pause-762562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s              kubelet          Node pause-762562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s              kubelet          Node pause-762562 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m10s              kubelet          Node pause-762562 status is now: NodeReady
	  Normal  RegisteredNode           2m7s               node-controller  Node pause-762562 event: Registered Node pause-762562 in Controller
	  Normal  RegisteredNode           42s                node-controller  Node pause-762562 event: Registered Node pause-762562 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node pause-762562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node pause-762562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node pause-762562 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-762562 event: Registered Node pause-762562 in Controller
	
	
	==> dmesg <==
	[Oct 2 20:43] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002949] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct 2 20:44] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082952] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.125217] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.124105] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.147957] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.813558] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.304317] kauditd_printk_skb: 210 callbacks suppressed
	[Oct 2 20:45] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.028446] kauditd_printk_skb: 319 callbacks suppressed
	[Oct 2 20:46] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.530494] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.127975] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.422899] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934] <==
	{"level":"warn","ts":"2025-10-02T20:45:52.128677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.149387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.184116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.205570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.228474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.243260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.346198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32872","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:46:01.007352Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T20:46:01.007458Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-762562","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.218:2380"],"advertise-client-urls":["https://192.168.50.218:2379"]}
	{"level":"error","ts":"2025-10-02T20:46:01.007595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:46:08.010698Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:46:08.010804Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:46:08.010827Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4bfeef2bb38c2b5","current-leader-member-id":"d4bfeef2bb38c2b5"}
	{"level":"info","ts":"2025-10-02T20:46:08.010966Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T20:46:08.010978Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T20:46:08.011878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:46:08.011943Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:46:08.011958Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T20:46:08.012007Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.218:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:46:08.012036Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.218:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:46:08.012044Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.218:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:46:08.017932Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.218:2380"}
	{"level":"error","ts":"2025-10-02T20:46:08.018008Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.218:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:46:08.018033Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.218:2380"}
	{"level":"info","ts":"2025-10-02T20:46:08.018040Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-762562","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.218:2380"],"advertise-client-urls":["https://192.168.50.218:2379"]}
	
	
	==> etcd [d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0] <==
	{"level":"warn","ts":"2025-10-02T20:46:22.279113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.285961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.296259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.313379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.323721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.329236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.338463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.346736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.364088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.372637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.392404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.397103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.405018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.420862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.435123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.436105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.444765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.462863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.465834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.487647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.501315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.509258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.522411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.559470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.608918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:46:39 up 2 min,  0 users,  load average: 1.26, 0.65, 0.26
	Linux pause-762562 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427] <==
	W1002 20:46:16.775395       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:16.794440       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:16.825664       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:16.828250       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.048469       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.060341       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.068082       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.109504       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.144419       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.179244       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.194097       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.213974       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.226688       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.249296       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.331118       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.379905       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.534324       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.539921       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.590525       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.737488       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.759085       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.903274       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.932254       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:18.026817       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:18.028235       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c] <==
	I1002 20:46:23.370375       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 20:46:23.372417       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 20:46:23.372591       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 20:46:23.372699       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 20:46:23.372719       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 20:46:23.374418       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 20:46:23.377822       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 20:46:23.378717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 20:46:23.378835       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 20:46:23.378887       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 20:46:23.380657       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 20:46:23.382728       1 cache.go:39] Caches are synced for autoregister controller
	E1002 20:46:23.382919       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 20:46:23.395328       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 20:46:23.395385       1 policy_source.go:240] refreshing policies
	I1002 20:46:23.401947       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:46:23.986330       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:46:24.173945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:46:24.740359       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:46:24.787902       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:46:24.820331       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:46:24.831503       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:46:36.413048       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:46:36.416698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:46:36.420852       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea] <==
	I1002 20:46:26.665638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-762562"
	I1002 20:46:26.665810       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 20:46:26.663236       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:46:26.665926       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:46:26.663254       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 20:46:26.663586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:46:26.668003       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:46:26.670239       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:46:26.672446       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:46:26.672549       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:46:26.676032       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 20:46:26.678327       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 20:46:26.682649       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 20:46:26.684951       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:46:26.691225       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:46:26.699449       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:46:26.702726       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:46:26.709078       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 20:46:26.711792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:46:26.711806       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:46:26.711811       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:46:26.712844       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:46:26.714653       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 20:46:26.715008       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:46:26.726607       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2] <==
	I1002 20:45:56.463547       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:45:56.464492       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:45:56.465621       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:45:56.465716       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:45:56.470663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:45:56.472979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:45:56.476365       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:45:56.480809       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 20:45:56.480887       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 20:45:56.484091       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 20:45:56.485344       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:45:56.487747       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 20:45:56.493260       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:45:56.495674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:45:56.504652       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 20:45:56.504705       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 20:45:56.505182       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 20:45:56.504797       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:45:56.504848       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:45:56.505610       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:45:56.505644       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:45:56.504874       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:45:56.504723       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:45:56.507480       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 20:45:56.519885       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2] <==
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 20:46:13.567798       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 20:46:13.567825       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:46:13.585272       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:46:13.585772       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:46:13.585813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:46:13.593379       1 config.go:200] "Starting service config controller"
	I1002 20:46:13.593422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:46:13.593447       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:46:13.593453       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:46:13.593492       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:46:13.593497       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:46:13.594102       1 config.go:309] "Starting node config controller"
	I1002 20:46:13.594112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:46:13.594118       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:46:13.694037       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:46:13.694113       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:46:13.694506       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1002 20:46:18.150636       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	E1002 20:46:23.312573       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1002 20:46:23.312935       1 reflector.go:205] "Failed to watch" err="nodes \"pause-762562\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:46:23.313014       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:46:23.313044       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
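The "Failed to watch ... is forbidden" errors above are transient: they occur while the restarted apiserver is still recreating its RBAC bootstrap policy, so kube-proxy's service account briefly lacks its usual watch permissions. A quick way to confirm the permissions have come back is an impersonated can-i check (a diagnostic sketch; the context name is the profile used in this run):

	# both should print "yes" once RBAC bootstrap has completed
	kubectl --context pause-762562 auth can-i watch nodes --as=system:serviceaccount:kube-system:kube-proxy
	kubectl --context pause-762562 auth can-i watch endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:kube-proxy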
	
	
	==> kube-proxy [5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f] <==
	I1002 20:45:51.070205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:45:53.172811       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:45:53.172862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.218"]
	E1002 20:45:53.172936       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:45:53.344518       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 20:45:53.345299       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 20:45:53.345368       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:45:53.376778       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:45:53.378062       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:45:53.378237       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:45:53.385413       1 config.go:200] "Starting service config controller"
	I1002 20:45:53.385509       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:45:53.385538       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:45:53.385554       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:45:53.385583       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:45:53.385598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:45:53.385996       1 config.go:309] "Starting node config controller"
	I1002 20:45:53.386037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:45:53.386046       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:45:53.485769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:45:53.485806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:45:53.485827       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
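Both kube-proxy instances log the same IPv6 probe failure ("can't initialize ip6tables table `nat'") and then fall back to single-stack IPv4; on the minikube guest this normally just means the ip6table_nat kernel module is not loaded. The probe can be reproduced by hand from the host (illustrative sketch using minikube ssh):

	# check whether the IPv6 NAT table is available inside the pause-762562 VM
	minikube -p pause-762562 ssh "lsmod | grep ip6table_nat"
	minikube -p pause-762562 ssh "sudo ip6tables -t nat -L POSTROUTING -n"   # the same check kube-proxy performs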
	
	
	==> kube-scheduler [009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538] <==
	I1002 20:45:51.795259       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:45:53.172628       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:45:53.172734       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:45:53.172757       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:45:53.173210       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:45:53.211575       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:45:53.211636       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:45:53.216073       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:45:53.216111       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:45:53.216509       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:45:53.216574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:45:53.316546       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:46:00.865499       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 20:46:00.865646       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 20:46:00.865672       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 20:46:00.866245       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:46:00.866981       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 20:46:00.867096       1 run.go:72] "command failed" err="finished without leader elect"
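The first scheduler instance exiting with "finished without leader elect" after its secure server stops is the graceful-termination path taken when the control plane is restarted, not a crash. Scheduler leadership is tracked through a coordination Lease, so the currently active instance can be identified directly (diagnostic sketch, same context as above):

	# holderIdentity names the kube-scheduler instance that currently holds the lock
	kubectl --context pause-762562 -n kube-system get lease kube-scheduler -o yaml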
	
	
	==> kube-scheduler [acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234] <==
	E1002 20:46:20.382694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:46:20.386074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:46:20.400041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.50.218:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 20:46:20.435724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.50.218:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:46:20.461500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.50.218:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:46:20.485977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:46:20.495806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.50.218:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:46:20.657725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.50.218:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:46:20.684315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.50.218:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:46:20.729348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:46:23.223803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 20:46:23.301350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:46:23.301479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:46:23.301540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:46:23.302312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:46:23.302667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:46:23.303599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:46:23.304255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:46:23.304341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:46:23.304371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:46:23.305206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:46:23.305255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:46:23.305315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:46:23.305350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1002 20:46:28.668249       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
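The burst of "connection refused" and "forbidden" list errors in the replacement scheduler covers the window while the apiserver on 192.168.50.218:8443 restarts and re-creates its bootstrap RBAC roles; the closing "Caches are synced" line shows the informers recovered once that finished. The relevant apiserver post-start hook can be checked directly (illustrative; uses the standard verbose readiness endpoint):

	# rbac/bootstrap-roles should report ok once the bootstrap policy exists
	kubectl --context pause-762562 get --raw='/readyz?verbose' | grep bootstrap-roles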
	
	
	==> kubelet <==
	Oct 02 20:46:21 pause-762562 kubelet[4177]: I1002 20:46:21.601522    4177 kubelet_node_status.go:75] "Attempting to register node" node="pause-762562"
	Oct 02 20:46:22 pause-762562 kubelet[4177]: E1002 20:46:22.138343    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:22 pause-762562 kubelet[4177]: E1002 20:46:22.139211    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:22 pause-762562 kubelet[4177]: E1002 20:46:22.139523    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.143333    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.143807    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.144278    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.254310    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.435061    4177 kubelet_node_status.go:124] "Node was previously registered" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.435221    4177 kubelet_node_status.go:78] "Successfully registered node" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.435248    4177 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.436969    4177 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.507611    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-762562\" already exists" pod="kube-system/etcd-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.507738    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.527430    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-762562\" already exists" pod="kube-system/kube-apiserver-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.527583    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.537799    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-762562\" already exists" pod="kube-system/kube-controller-manager-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.537965    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.549418    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-762562\" already exists" pod="kube-system/kube-scheduler-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.927039    4177 apiserver.go:52] "Watching apiserver"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.958311    4177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.979650    4177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45b79789-7110-4e85-8a30-4b58f010d5c0-xtables-lock\") pod \"kube-proxy-v544h\" (UID: \"45b79789-7110-4e85-8a30-4b58f010d5c0\") " pod="kube-system/kube-proxy-v544h"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.979683    4177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45b79789-7110-4e85-8a30-4b58f010d5c0-lib-modules\") pod \"kube-proxy-v544h\" (UID: \"45b79789-7110-4e85-8a30-4b58f010d5c0\") " pod="kube-system/kube-proxy-v544h"
	Oct 02 20:46:30 pause-762562 kubelet[4177]: E1002 20:46:30.093978    4177 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759437990093605642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 02 20:46:30 pause-762562 kubelet[4177]: E1002 20:46:30.094576    4177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759437990093605642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
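The eviction-manager errors at the end of the kubelet log mean the kubelet could not derive HasDedicatedImageFs from the image-filesystem stats CRI-O returned for /var/lib/containers/storage/overlay-images. The raw CRI response the kubelet is parsing can be dumped on the node with crictl (minimal sketch):

	# show the image filesystem usage CRI-O reports over the CRI API
	minikube -p pause-762562 ssh "sudo crictl imagefsinfo"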
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-762562 -n pause-762562
helpers_test.go:269: (dbg) Run:  kubectl --context pause-762562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-762562 -n pause-762562
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-762562 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-762562 logs -n 25: (3.496380405s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-446943 sudo systemctl cat docker --no-pager                                                                                                              │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /etc/docker/daemon.json                                                                                                                  │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo docker system info                                                                                                                           │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl cat cri-docker --no-pager                                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cri-dockerd --version                                                                                                                        │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl status containerd --all --full --no-pager                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl cat containerd --no-pager                                                                                                          │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo cat /etc/containerd/config.toml                                                                                                              │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo containerd config dump                                                                                                                       │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl status crio --all --full --no-pager                                                                                                │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo systemctl cat crio --no-pager                                                                                                                │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-446943 sudo crio config                                                                                                                                  │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ delete  │ -p cilium-446943                                                                                                                                                   │ cilium-446943             │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-787090 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-571399 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-571399    │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │                     │
	│ delete  │ -p running-upgrade-571399                                                                                                                                          │ running-upgrade-571399    │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │ 02 Oct 25 20:46 UTC │
	│ ssh     │ -p NoKubernetes-555034 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-555034       │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │                     │
	│ delete  │ -p NoKubernetes-555034                                                                                                                                             │ NoKubernetes-555034       │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │ 02 Oct 25 20:46 UTC │
	│ start   │ -p stopped-upgrade-485667 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-485667    │ jenkins │ v1.32.0 │ 02 Oct 25 20:46 UTC │                     │
	│ start   │ -p cert-expiration-491886 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-491886    │ jenkins │ v1.37.0 │ 02 Oct 25 20:46 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:46:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:46:14.030906   51101 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:46:14.031236   51101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:46:14.031248   51101 out.go:374] Setting ErrFile to fd 2...
	I1002 20:46:14.031254   51101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:46:14.031584   51101 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:46:14.032310   51101 out.go:368] Setting JSON to false
	I1002 20:46:14.033707   51101 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5317,"bootTime":1759432657,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:46:14.033852   51101 start.go:140] virtualization: kvm guest
	I1002 20:46:14.035520   51101 out.go:179] * [cert-expiration-491886] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:46:14.036709   51101 notify.go:221] Checking for updates...
	I1002 20:46:14.036742   51101 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:46:14.037745   51101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:46:14.039186   51101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:46:14.040117   51101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:46:14.041077   51101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:46:14.042081   51101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:46:14.043406   51101 config.go:182] Loaded profile config "kubernetes-upgrade-787090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 20:46:14.043529   51101 config.go:182] Loaded profile config "pause-762562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:14.043606   51101 config.go:182] Loaded profile config "stopped-upgrade-485667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1002 20:46:14.043688   51101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:46:14.080757   51101 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 20:46:14.081943   51101 start.go:306] selected driver: kvm2
	I1002 20:46:14.081966   51101 start.go:936] validating driver "kvm2" against <nil>
	I1002 20:46:14.081975   51101 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:46:14.082823   51101 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:46:14.082903   51101 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:46:14.097308   51101 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:46:14.097333   51101 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:46:14.112940   51101 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 20:46:14.112979   51101 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:46:14.113258   51101 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:46:14.113275   51101 cni.go:84] Creating CNI manager for ""
	I1002 20:46:14.113316   51101 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:46:14.113320   51101 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:46:14.113361   51101 start.go:350] cluster config:
	{Name:cert-expiration-491886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-491886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
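This generated cluster config is what gets persisted to the profile's config.json (the "Saving config" line a few entries below shows the exact path), so individual fields such as the 3m certificate expiration can be read back from disk. A sketch, assuming jq is available on the host and that the JSON keys follow the Go field names shown above:

	# CertExpiration is stored as a Go duration (3m0s)
	jq '.CertExpiration' /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/cert-expiration-491886/config.json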
	I1002 20:46:14.113442   51101 iso.go:125] acquiring lock: {Name:mkabc2fb4ac96edf87725f05149cf44e9a15d593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:46:14.115675   51101 out.go:179] * Starting "cert-expiration-491886" primary control-plane node in "cert-expiration-491886" cluster
	I1002 20:46:13.926866   51053 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1002 20:46:13.926912   51053 preload.go:148] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1002 20:46:13.926922   51053 cache.go:56] Caching tarball of preloaded images
	I1002 20:46:13.927038   51053 preload.go:174] Found /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:46:13.927048   51053 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1002 20:46:13.927213   51053 profile.go:148] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/stopped-upgrade-485667/config.json ...
	I1002 20:46:13.927237   51053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/stopped-upgrade-485667/config.json: {Name:mkc240ce72d1c877953eba0ee5e377766a38e76e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:13.927449   51053 start.go:365] acquiring machines lock for stopped-upgrade-485667: {Name:mk83006c688982612686a8dbdd0b9c4ecd5d338c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:46:13.909116   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:13.909759   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:13.909790   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:13.910113   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:13.910166   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:13.910095   50867 retry.go:31] will retry after 475.797743ms: waiting for domain to come up
	I1002 20:46:14.388079   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:14.388789   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:14.388815   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:14.389229   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:14.389269   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:14.389204   50867 retry.go:31] will retry after 703.24373ms: waiting for domain to come up
	I1002 20:46:15.093597   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:15.094174   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:15.094202   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:15.094525   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:15.094546   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:15.094494   50867 retry.go:31] will retry after 1.13908592s: waiting for domain to come up
	I1002 20:46:16.235759   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:16.236365   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:16.236394   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:16.236708   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:16.236748   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:16.236651   50867 retry.go:31] will retry after 1.432989784s: waiting for domain to come up
	I1002 20:46:17.671385   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:17.672078   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:17.672102   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:17.672351   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:17.672369   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:17.672340   50867 retry.go:31] will retry after 1.709787351s: waiting for domain to come up
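The repeated "waiting for domain to come up" retries mean libvirt has not yet reported an IP address for the kubernetes-upgrade-787090 VM on its dedicated network; the driver is polling the same lease and ARP data that virsh exposes. The state can be inspected manually on the host (illustrative, using the network and domain names from the log):

	# DHCP leases on the per-profile network, and the VM's interface MAC addresses
	sudo virsh net-dhcp-leases mk-kubernetes-upgrade-787090
	sudo virsh domiflist kubernetes-upgrade-787090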
	I1002 20:46:14.116690   51101 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:46:14.116754   51101 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:46:14.116762   51101 cache.go:59] Caching tarball of preloaded images
	I1002 20:46:14.116890   51101 preload.go:233] Found /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:46:14.116901   51101 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:46:14.117035   51101 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/cert-expiration-491886/config.json ...
	I1002 20:46:14.117057   51101 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/cert-expiration-491886/config.json: {Name:mk2dbc2d3afa0d96241dceb5776cfe9b3406d0b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:14.117325   51101 start.go:361] acquireMachinesLock for cert-expiration-491886: {Name:mk83006c688982612686a8dbdd0b9c4ecd5d338c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:46:18.394094   47754 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b 5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f 009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538 afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934 300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427 3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2 f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 ab87ccb20dd390ee4baa6b2fd84da917c1b052163a84e07245449fd4c55845cb 4ea71438985a2992f2bf50cd478490a6d639389f0e34268746695f217d99f8a8 e051a92c67661eb3bc5d520f9ab4ceb8c6b7f261a9235c0c3542faf922533b89 69466eb2938e65a72f1595e1a878d72261b434b2fad75031ba0f5f18463ba4a3 02ece0b6778b755e478b9dcc94630ca1d08c4759da97528343a46d0b78c36f2d: (27.804380819s)
	W1002 20:46:18.394184   47754 kubeadm.go:648] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b 5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f 009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538 afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934 300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427 3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2 f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 ab87ccb20dd390ee4baa6b2fd84da917c1b052163a84e07245449fd4c55845cb 4ea71438985a2992f2bf50cd478490a6d639389f0e34268746695f217d99f8a8 e051a92c67661eb3bc5d520f9ab4ceb8c6b7f261a9235c0c3542faf922533b89 69466eb2938e65a72f1595e1a878d72261b434b2fad75031ba0f5f18463ba4a3 02ece0b6778b755e478b9dcc94630ca1d08c4759da97528343a46d0b78c36f2d: Process exited with status 1
	stdout:
	f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b
	5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f
	009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538
	afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934
	300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427
	3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2
	
	stderr:
	E1002 20:46:18.389345    3632 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00\": container with ID starting with f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 not found: ID does not exist" containerID="f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00"
	time="2025-10-02T20:46:18Z" level=fatal msg="stopping the container \"f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00\": rpc error: code = NotFound desc = could not find container \"f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00\": container with ID starting with f63cc40a3b15274de61d35a468a3d688dde015b75fa166739e90e2b6f6396d00 not found: ID does not exist"
	I1002 20:46:18.394255   47754 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:46:18.430877   47754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:46:18.445039   47754 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  2 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5642 Oct  2 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Oct  2 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Oct  2 20:44 /etc/kubernetes/scheduler.conf
	
	I1002 20:46:18.445122   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:46:18.457407   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:46:18.469661   47754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:18.469740   47754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:46:18.483509   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:46:18.497176   47754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:18.497237   47754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:46:18.512510   47754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:46:18.526966   47754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:18.527028   47754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:46:18.540095   47754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:46:18.553313   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:18.611967   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:19.383898   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:19.384497   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:19.384526   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:19.384961   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:19.384992   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:19.384918   50867 retry.go:31] will retry after 1.893811672s: waiting for domain to come up
	I1002 20:46:21.280294   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:21.280994   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:21.281020   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:21.281320   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:21.281348   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:21.281299   50867 retry.go:31] will retry after 2.456569689s: waiting for domain to come up
	I1002 20:46:19.526029   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:19.828251   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:19.905577   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:20.004020   47754 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:46:20.004120   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:20.504473   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:21.004457   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:21.037649   47754 api_server.go:72] duration metric: took 1.033640349s to wait for apiserver process to appear ...
	I1002 20:46:21.037683   47754 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:46:21.037708   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:23.189380   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:46:23.189408   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:46:23.189423   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:23.244050   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 20:46:23.244094   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 20:46:23.538557   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:23.546990   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:46:23.547037   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:46:24.038537   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:24.044755   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 20:46:24.044779   47754 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 20:46:24.538396   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:24.543706   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I1002 20:46:24.551760   47754 api_server.go:141] control plane version: v1.34.1
	I1002 20:46:24.551792   47754 api_server.go:131] duration metric: took 3.514101808s to wait for apiserver health ...
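
The healthz probes above tolerate 403s (the anonymous probe user) and 500s (post-start hooks such as rbac/bootstrap-roles still pending) and simply keep polling until /healthz answers 200 "ok". A rough sketch of that loop, skipping TLS verification purely for brevity (minikube's real check authenticates with the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// treating 403/500 responses as "not ready yet". Sketch only.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.218:8443/healthz", 2*time.Minute))
}
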
	I1002 20:46:24.551846   47754 cni.go:84] Creating CNI manager for ""
	I1002 20:46:24.551854   47754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:46:24.553320   47754 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:46:24.554540   47754 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:46:24.570698   47754 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
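
The scp line above drops a 496-byte bridge CNI configuration into /etc/cni/net.d/1-k8s.conflist. The file's contents are not shown in the log; the sketch below writes a typical bridge conflist from Go, with the plugin list and the 10.244.0.0/16 pod CIDR assumed purely for illustration:

package main

import (
	"fmt"
	"os"
)

// A plausible bridge CNI conflist. The actual file minikube generates is not
// shown in the log, so the options and pod CIDR below are assumptions.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
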
	I1002 20:46:24.599794   47754 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:46:24.606330   47754 system_pods.go:59] 6 kube-system pods found
	I1002 20:46:24.606378   47754 system_pods.go:61] "coredns-66bc5c9577-9pqwk" [37e86407-39b3-4b89-a6d2-943913357f8d] Running
	I1002 20:46:24.606394   47754 system_pods.go:61] "etcd-pause-762562" [d6ff7716-87e5-456d-8635-b8f9eb552c54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:46:24.606406   47754 system_pods.go:61] "kube-apiserver-pause-762562" [1d993398-0c38-4576-8850-b312521d95d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:46:24.606419   47754 system_pods.go:61] "kube-controller-manager-pause-762562" [4417516d-6470-4a03-96f6-85ce4fa96a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:46:24.606427   47754 system_pods.go:61] "kube-proxy-v544h" [45b79789-7110-4e85-8a30-4b58f010d5c0] Running
	I1002 20:46:24.606439   47754 system_pods.go:61] "kube-scheduler-pause-762562" [a3774909-2317-4f36-b15c-dcba06275c07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:46:24.606449   47754 system_pods.go:74] duration metric: took 6.624865ms to wait for pod list to return data ...
	I1002 20:46:24.606463   47754 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:46:24.612438   47754 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 20:46:24.612475   47754 node_conditions.go:123] node cpu capacity is 2
	I1002 20:46:24.612489   47754 node_conditions.go:105] duration metric: took 6.021325ms to run NodePressure ...
	I1002 20:46:24.612548   47754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:46:24.887522   47754 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 20:46:24.891798   47754 kubeadm.go:743] kubelet initialised
	I1002 20:46:24.891828   47754 kubeadm.go:744] duration metric: took 4.27595ms waiting for restarted kubelet to initialise ...
	I1002 20:46:24.891849   47754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:46:24.908561   47754 ops.go:34] apiserver oom_adj: -16
	I1002 20:46:24.908589   47754 kubeadm.go:601] duration metric: took 34.456812853s to restartPrimaryControlPlane
	I1002 20:46:24.908604   47754 kubeadm.go:402] duration metric: took 34.675329785s to StartCluster
	I1002 20:46:24.908628   47754 settings.go:142] acquiring lock: {Name:mk6a3acbc81c910cfbdc018b811be13c1e438c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:24.908734   47754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:46:24.909648   47754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/kubeconfig: {Name:mk0c75eb22a83f2f7ea4f564360059d4e6d21b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:24.909958   47754 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:24.910094   47754 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:46:24.910234   47754 config.go:182] Loaded profile config "pause-762562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:24.913854   47754 out.go:179] * Enabled addons: 
	I1002 20:46:24.913854   47754 out.go:179] * Verifying Kubernetes components...
	I1002 20:46:23.740768   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:23.741257   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:23.741283   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:23.741575   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:23.741599   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:23.741508   50867 retry.go:31] will retry after 2.567460998s: waiting for domain to come up
	I1002 20:46:26.310803   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:26.311304   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | no network interface addresses found for domain kubernetes-upgrade-787090 (source=lease)
	I1002 20:46:26.311327   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | trying to list again with source=arp
	I1002 20:46:26.311680   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find current IP address of domain kubernetes-upgrade-787090 in network mk-kubernetes-upgrade-787090 (interfaces detected: [])
	I1002 20:46:26.311705   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | I1002 20:46:26.311609   50867 retry.go:31] will retry after 3.98742618s: waiting for domain to come up
	I1002 20:46:24.915251   47754 addons.go:514] duration metric: took 5.156828ms for enable addons: enabled=[]
	I1002 20:46:24.915324   47754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:25.132447   47754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:25.151207   47754 node_ready.go:35] waiting up to 6m0s for node "pause-762562" to be "Ready" ...
	I1002 20:46:25.155516   47754 node_ready.go:49] node "pause-762562" is "Ready"
	I1002 20:46:25.155554   47754 node_ready.go:38] duration metric: took 4.288982ms for node "pause-762562" to be "Ready" ...
	I1002 20:46:25.155571   47754 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:46:25.155635   47754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:46:25.183783   47754 api_server.go:72] duration metric: took 273.783101ms to wait for apiserver process to appear ...
	I1002 20:46:25.183820   47754 api_server.go:88] waiting for apiserver healthz status ...
	I1002 20:46:25.183841   47754 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I1002 20:46:25.188309   47754 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I1002 20:46:25.189311   47754 api_server.go:141] control plane version: v1.34.1
	I1002 20:46:25.189343   47754 api_server.go:131] duration metric: took 5.514225ms to wait for apiserver health ...
	I1002 20:46:25.189355   47754 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 20:46:25.194635   47754 system_pods.go:59] 6 kube-system pods found
	I1002 20:46:25.194669   47754 system_pods.go:61] "coredns-66bc5c9577-9pqwk" [37e86407-39b3-4b89-a6d2-943913357f8d] Running
	I1002 20:46:25.194683   47754 system_pods.go:61] "etcd-pause-762562" [d6ff7716-87e5-456d-8635-b8f9eb552c54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:46:25.194693   47754 system_pods.go:61] "kube-apiserver-pause-762562" [1d993398-0c38-4576-8850-b312521d95d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:46:25.194703   47754 system_pods.go:61] "kube-controller-manager-pause-762562" [4417516d-6470-4a03-96f6-85ce4fa96a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:46:25.194709   47754 system_pods.go:61] "kube-proxy-v544h" [45b79789-7110-4e85-8a30-4b58f010d5c0] Running
	I1002 20:46:25.194719   47754 system_pods.go:61] "kube-scheduler-pause-762562" [a3774909-2317-4f36-b15c-dcba06275c07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:46:25.194741   47754 system_pods.go:74] duration metric: took 5.378726ms to wait for pod list to return data ...
	I1002 20:46:25.194756   47754 default_sa.go:34] waiting for default service account to be created ...
	I1002 20:46:25.198211   47754 default_sa.go:45] found service account: "default"
	I1002 20:46:25.198244   47754 default_sa.go:55] duration metric: took 3.479823ms for default service account to be created ...
	I1002 20:46:25.198256   47754 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 20:46:25.201680   47754 system_pods.go:86] 6 kube-system pods found
	I1002 20:46:25.201710   47754 system_pods.go:89] "coredns-66bc5c9577-9pqwk" [37e86407-39b3-4b89-a6d2-943913357f8d] Running
	I1002 20:46:25.201735   47754 system_pods.go:89] "etcd-pause-762562" [d6ff7716-87e5-456d-8635-b8f9eb552c54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 20:46:25.201745   47754 system_pods.go:89] "kube-apiserver-pause-762562" [1d993398-0c38-4576-8850-b312521d95d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 20:46:25.201757   47754 system_pods.go:89] "kube-controller-manager-pause-762562" [4417516d-6470-4a03-96f6-85ce4fa96a6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 20:46:25.201764   47754 system_pods.go:89] "kube-proxy-v544h" [45b79789-7110-4e85-8a30-4b58f010d5c0] Running
	I1002 20:46:25.201772   47754 system_pods.go:89] "kube-scheduler-pause-762562" [a3774909-2317-4f36-b15c-dcba06275c07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 20:46:25.201780   47754 system_pods.go:126] duration metric: took 3.517657ms to wait for k8s-apps to be running ...
	I1002 20:46:25.201788   47754 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:46:25.201831   47754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:46:25.220919   47754 system_svc.go:56] duration metric: took 19.119842ms WaitForService to wait for kubelet
	I1002 20:46:25.220954   47754 kubeadm.go:586] duration metric: took 310.958896ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:46:25.220973   47754 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:46:25.224539   47754 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1002 20:46:25.224574   47754 node_conditions.go:123] node cpu capacity is 2
	I1002 20:46:25.224591   47754 node_conditions.go:105] duration metric: took 3.610889ms to run NodePressure ...
	I1002 20:46:25.224608   47754 start.go:242] waiting for startup goroutines ...
	I1002 20:46:25.224618   47754 start.go:247] waiting for cluster config update ...
	I1002 20:46:25.224631   47754 start.go:256] writing updated cluster config ...
	I1002 20:46:25.225042   47754 ssh_runner.go:195] Run: rm -f paused
	I1002 20:46:25.231058   47754 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:46:25.231517   47754 kapi.go:59] client config for pause-762562: &rest.Config{Host:"https://192.168.50.218:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
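
The rest.Config dump above is the client minikube builds from the profile's kubeconfig and client certificates. An equivalent client can be constructed with client-go from the same kubeconfig, as in this sketch (not minikube's kapi.go code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the kubeconfig path the log points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-9524/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods, the same data the "system_pods" wait inspects.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}
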
	I1002 20:46:25.234746   47754 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9pqwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:25.240558   47754 pod_ready.go:94] pod "coredns-66bc5c9577-9pqwk" is "Ready"
	I1002 20:46:25.240583   47754 pod_ready.go:86] duration metric: took 5.81326ms for pod "coredns-66bc5c9577-9pqwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:25.243280   47754 pod_ready.go:83] waiting for pod "etcd-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 20:46:27.249658   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
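
The warning above comes from polling the pod's PodReady condition until it turns True or the pod disappears. A sketch of that readiness check with client-go (kubeconfig path taken from the log; not minikube's pod_ready.go itself):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-9524/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(cs, "kube-system", "etcd-pause-762562")
	fmt.Println(ready, err)
}
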
	I1002 20:46:32.051674   51053 start.go:369] acquired machines lock for "stopped-upgrade-485667" in 18.124187556s
	I1002 20:46:32.051776   51053 start.go:93] Provisioning new machine with config: &{Name:stopped-upgrade-485667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-485667 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:32.051933   51053 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 20:46:32.055740   51053 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 20:46:32.055991   51053 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:46:32.056065   51053 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:46:32.072069   51053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I1002 20:46:32.072533   51053 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:46:32.073096   51053 main.go:141] libmachine: Using API Version  1
	I1002 20:46:32.073112   51053 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:46:32.073505   51053 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:46:32.073715   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .GetMachineName
	I1002 20:46:32.073860   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .DriverName
	I1002 20:46:32.074013   51053 start.go:159] libmachine.API.Create for "stopped-upgrade-485667" (driver="kvm2")
	I1002 20:46:32.074049   51053 client.go:168] LocalClient.Create starting
	I1002 20:46:32.074078   51053 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem
	I1002 20:46:32.074110   51053 main.go:141] libmachine: Decoding PEM data...
	I1002 20:46:32.074125   51053 main.go:141] libmachine: Parsing certificate...
	I1002 20:46:32.074171   51053 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem
	I1002 20:46:32.074191   51053 main.go:141] libmachine: Decoding PEM data...
	I1002 20:46:32.074199   51053 main.go:141] libmachine: Parsing certificate...
	I1002 20:46:32.074214   51053 main.go:141] libmachine: Running pre-create checks...
	I1002 20:46:32.074219   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .PreCreateCheck
	I1002 20:46:32.074647   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .GetConfigRaw
	I1002 20:46:32.075100   51053 main.go:141] libmachine: Creating machine...
	I1002 20:46:32.075109   51053 main.go:141] libmachine: (stopped-upgrade-485667) Calling .Create
	I1002 20:46:32.075252   51053 main.go:141] libmachine: (stopped-upgrade-485667) creating domain...
	I1002 20:46:32.075266   51053 main.go:141] libmachine: (stopped-upgrade-485667) creating network...
	I1002 20:46:32.076857   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | found existing default network
	I1002 20:46:32.077041   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <network connections='2'>
	I1002 20:46:32.077058   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>default</name>
	I1002 20:46:32.077069   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1002 20:46:32.077084   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <forward mode='nat'>
	I1002 20:46:32.077095   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <nat>
	I1002 20:46:32.077104   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <port start='1024' end='65535'/>
	I1002 20:46:32.077111   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </nat>
	I1002 20:46:32.077116   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </forward>
	I1002 20:46:32.077122   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1002 20:46:32.077132   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1002 20:46:32.077139   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1002 20:46:32.077147   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <dhcp>
	I1002 20:46:32.077165   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1002 20:46:32.077182   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </dhcp>
	I1002 20:46:32.077197   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </ip>
	I1002 20:46:32.077205   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </network>
	I1002 20:46:32.077217   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
	I1002 20:46:32.077980   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:32.077836   51272 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123900}
	I1002 20:46:32.078065   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | defining private network:
	I1002 20:46:32.078091   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
	I1002 20:46:32.078101   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <network>
	I1002 20:46:32.078109   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>mk-stopped-upgrade-485667</name>
	I1002 20:46:32.078118   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <dns enable='no'/>
	I1002 20:46:32.078126   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 20:46:32.078136   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <dhcp>
	I1002 20:46:32.078143   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 20:46:32.078150   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </dhcp>
	I1002 20:46:32.078157   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </ip>
	I1002 20:46:32.078180   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </network>
	I1002 20:46:32.078192   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
	I1002 20:46:32.084108   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | creating private network mk-stopped-upgrade-485667 192.168.39.0/24...
	I1002 20:46:32.166658   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | private network mk-stopped-upgrade-485667 192.168.39.0/24 created
	I1002 20:46:32.166965   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <network>
	I1002 20:46:32.166975   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>mk-stopped-upgrade-485667</name>
	I1002 20:46:32.166987   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <uuid>21a721e2-d788-49c0-88c5-b81f0f9ffff9</uuid>
	I1002 20:46:32.167006   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting up store path in /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667 ...
	I1002 20:46:32.167015   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I1002 20:46:32.167027   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <mac address='52:54:00:00:70:d3'/>
	I1002 20:46:32.167047   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <dns enable='no'/>
	I1002 20:46:32.167063   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1002 20:46:32.167069   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <dhcp>
	I1002 20:46:32.167078   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1002 20:46:32.167083   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </dhcp>
	I1002 20:46:32.167102   51053 main.go:141] libmachine: (stopped-upgrade-485667) building disk image from file:///home/jenkins/minikube-integration/21683-9524/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1002 20:46:32.167130   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </ip>
	I1002 20:46:32.167163   51053 main.go:141] libmachine: (stopped-upgrade-485667) Downloading /home/jenkins/minikube-integration/21683-9524/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21683-9524/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso...
	I1002 20:46:32.167183   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </network>
	I1002 20:46:32.167198   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
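
The XML above is the private network the driver defines for the new VM, carved out of the free 192.168.39.0/24 subnet it found. A rough sketch of defining and starting such a network through the Go libvirt bindings (import path and error handling assumed; this is not the driver's actual code):

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-stopped-upgrade-485667</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent network, then start it; this corresponds to the
	// "creating private network ... created" lines in the log.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		panic(err)
	}
	defer network.Free()
	if err := network.Create(); err != nil {
		panic(err)
	}
	if err := network.SetAutostart(true); err != nil {
		panic(err)
	}
	fmt.Println("network defined and started")
}
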
	I1002 20:46:32.167208   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:32.166963   51272 common.go:147] Making disk image using store path: /home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:46:32.377703   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:32.377548   51272 common.go:154] Creating ssh key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/id_rsa...
	I1002 20:46:33.077050   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:33.076913   51272 common.go:160] Creating raw disk image: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/stopped-upgrade-485667.rawdisk...
	I1002 20:46:33.077080   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | Writing magic tar header
	I1002 20:46:33.077095   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | Writing SSH key tar header
	I1002 20:46:33.077103   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:33.077043   51272 common.go:174] Fixing permissions on /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667 ...
	I1002 20:46:33.077237   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667
	I1002 20:46:33.077262   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667 (perms=drwx------)
	I1002 20:46:33.077283   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube/machines
	I1002 20:46:33.077298   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:46:33.077309   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-9524
	I1002 20:46:33.077320   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1002 20:46:33.077328   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home/jenkins
	I1002 20:46:33.077341   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube/machines (perms=drwxr-xr-x)
	I1002 20:46:33.077351   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | checking permissions on dir: /home
	I1002 20:46:33.077371   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | skipping /home - not owner
	I1002 20:46:33.077383   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524/.minikube (perms=drwxr-xr-x)
	I1002 20:46:33.077396   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration/21683-9524 (perms=drwxrwxr-x)
	I1002 20:46:33.077406   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 20:46:33.077427   51053 main.go:141] libmachine: (stopped-upgrade-485667) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
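
A few lines up the driver creates the machine's SSH key pair and then walks the store path fixing permissions. A sketch of the key-generation half, using the standard library plus golang.org/x/crypto/ssh (paths and key size are placeholders, not the driver's exact code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA key pair and write id_rsa / id_rsa.pub with the usual
	// permissions, similar in spirit to the "Creating ssh key" step above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
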
	I1002 20:46:33.077438   51053 main.go:141] libmachine: (stopped-upgrade-485667) defining domain...
	I1002 20:46:33.078675   51053 main.go:141] libmachine: (stopped-upgrade-485667) defining domain using XML: 
	I1002 20:46:33.078712   51053 main.go:141] libmachine: (stopped-upgrade-485667) <domain type='kvm'>
	I1002 20:46:33.078740   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <name>stopped-upgrade-485667</name>
	I1002 20:46:33.078755   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <memory unit='MiB'>3072</memory>
	I1002 20:46:33.078764   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <vcpu>2</vcpu>
	I1002 20:46:33.078771   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <features>
	I1002 20:46:33.078779   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <acpi/>
	I1002 20:46:33.078794   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <apic/>
	I1002 20:46:33.078804   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <pae/>
	I1002 20:46:33.078813   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </features>
	I1002 20:46:33.078824   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <cpu mode='host-passthrough'>
	I1002 20:46:33.078840   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </cpu>
	I1002 20:46:33.078850   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <os>
	I1002 20:46:33.078859   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <type>hvm</type>
	I1002 20:46:33.078870   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <boot dev='cdrom'/>
	I1002 20:46:33.078878   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <boot dev='hd'/>
	I1002 20:46:33.078887   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <bootmenu enable='no'/>
	I1002 20:46:33.078893   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </os>
	I1002 20:46:33.078902   51053 main.go:141] libmachine: (stopped-upgrade-485667)   <devices>
	I1002 20:46:33.078915   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <disk type='file' device='cdrom'>
	I1002 20:46:33.078928   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/boot2docker.iso'/>
	I1002 20:46:33.078937   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target dev='hdc' bus='scsi'/>
	I1002 20:46:33.078946   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <readonly/>
	I1002 20:46:33.078955   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </disk>
	I1002 20:46:33.078964   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <disk type='file' device='disk'>
	I1002 20:46:33.079014   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 20:46:33.079044   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/stopped-upgrade-485667.rawdisk'/>
	I1002 20:46:33.079054   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target dev='hda' bus='virtio'/>
	I1002 20:46:33.079061   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </disk>
	I1002 20:46:33.079070   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <interface type='network'>
	I1002 20:46:33.079079   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source network='mk-stopped-upgrade-485667'/>
	I1002 20:46:33.079092   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <model type='virtio'/>
	I1002 20:46:33.079099   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </interface>
	I1002 20:46:33.079122   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <interface type='network'>
	I1002 20:46:33.079136   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <source network='default'/>
	I1002 20:46:33.079142   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <model type='virtio'/>
	I1002 20:46:33.079157   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </interface>
	I1002 20:46:33.079163   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <serial type='pty'>
	I1002 20:46:33.079170   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target port='0'/>
	I1002 20:46:33.079176   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </serial>
	I1002 20:46:33.079181   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <console type='pty'>
	I1002 20:46:33.079187   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <target type='serial' port='0'/>
	I1002 20:46:33.079191   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </console>
	I1002 20:46:33.079200   51053 main.go:141] libmachine: (stopped-upgrade-485667)     <rng model='virtio'>
	I1002 20:46:33.079206   51053 main.go:141] libmachine: (stopped-upgrade-485667)       <backend model='random'>/dev/random</backend>
	I1002 20:46:33.079211   51053 main.go:141] libmachine: (stopped-upgrade-485667)     </rng>
	I1002 20:46:33.079215   51053 main.go:141] libmachine: (stopped-upgrade-485667)   </devices>
	I1002 20:46:33.079220   51053 main.go:141] libmachine: (stopped-upgrade-485667) </domain>
	I1002 20:46:33.079224   51053 main.go:141] libmachine: (stopped-upgrade-485667) 
	I1002 20:46:33.083800   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:03:17:38 in network default
	I1002 20:46:33.084615   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:33.084666   51053 main.go:141] libmachine: (stopped-upgrade-485667) starting domain...
	I1002 20:46:33.084686   51053 main.go:141] libmachine: (stopped-upgrade-485667) ensuring networks are active...
	I1002 20:46:33.085689   51053 main.go:141] libmachine: (stopped-upgrade-485667) Ensuring network default is active
	I1002 20:46:33.086060   51053 main.go:141] libmachine: (stopped-upgrade-485667) Ensuring network mk-stopped-upgrade-485667 is active
	I1002 20:46:33.086887   51053 main.go:141] libmachine: (stopped-upgrade-485667) getting domain XML...
	I1002 20:46:33.088040   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | starting domain XML:
	I1002 20:46:33.088056   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | <domain type='kvm'>
	I1002 20:46:33.088067   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <name>stopped-upgrade-485667</name>
	I1002 20:46:33.088074   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <uuid>83e30974-a6f6-45a8-b9b1-32a27433eab3</uuid>
	I1002 20:46:33.088083   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <memory unit='KiB'>3145728</memory>
	I1002 20:46:33.088090   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1002 20:46:33.088099   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <vcpu placement='static'>2</vcpu>
	I1002 20:46:33.088105   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <os>
	I1002 20:46:33.088120   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1002 20:46:33.088126   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <boot dev='cdrom'/>
	I1002 20:46:33.088136   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <boot dev='hd'/>
	I1002 20:46:33.088144   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <bootmenu enable='no'/>
	I1002 20:46:33.088153   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </os>
	I1002 20:46:33.088161   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <features>
	I1002 20:46:33.088190   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <acpi/>
	I1002 20:46:33.088206   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <apic/>
	I1002 20:46:33.088216   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <pae/>
	I1002 20:46:33.088231   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </features>
	I1002 20:46:33.088253   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1002 20:46:33.088261   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <clock offset='utc'/>
	I1002 20:46:33.088272   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <on_poweroff>destroy</on_poweroff>
	I1002 20:46:33.088286   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <on_reboot>restart</on_reboot>
	I1002 20:46:33.088298   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <on_crash>destroy</on_crash>
	I1002 20:46:33.088311   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   <devices>
	I1002 20:46:33.088323   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1002 20:46:33.088332   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <disk type='file' device='cdrom'>
	I1002 20:46:33.088345   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <driver name='qemu' type='raw'/>
	I1002 20:46:33.088364   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/boot2docker.iso'/>
	I1002 20:46:33.088375   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target dev='hdc' bus='scsi'/>
	I1002 20:46:33.088384   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <readonly/>
	I1002 20:46:33.088396   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1002 20:46:33.088404   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </disk>
	I1002 20:46:33.088415   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <disk type='file' device='disk'>
	I1002 20:46:33.088430   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1002 20:46:33.088456   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source file='/home/jenkins/minikube-integration/21683-9524/.minikube/machines/stopped-upgrade-485667/stopped-upgrade-485667.rawdisk'/>
	I1002 20:46:33.088465   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target dev='hda' bus='virtio'/>
	I1002 20:46:33.088484   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1002 20:46:33.088491   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </disk>
	I1002 20:46:33.088518   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1002 20:46:33.088535   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1002 20:46:33.088546   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </controller>
	I1002 20:46:33.088553   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1002 20:46:33.088562   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1002 20:46:33.088576   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1002 20:46:33.088591   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </controller>
	I1002 20:46:33.088606   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <interface type='network'>
	I1002 20:46:33.088617   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <mac address='52:54:00:31:36:40'/>
	I1002 20:46:33.088625   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source network='mk-stopped-upgrade-485667'/>
	I1002 20:46:33.088638   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <model type='virtio'/>
	I1002 20:46:33.088646   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1002 20:46:33.088651   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </interface>
	I1002 20:46:33.088656   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <interface type='network'>
	I1002 20:46:33.088662   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <mac address='52:54:00:03:17:38'/>
	I1002 20:46:33.088667   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <source network='default'/>
	I1002 20:46:33.088691   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <model type='virtio'/>
	I1002 20:46:33.088703   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1002 20:46:33.088709   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </interface>
	I1002 20:46:33.088714   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <serial type='pty'>
	I1002 20:46:33.088733   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target type='isa-serial' port='0'>
	I1002 20:46:33.088742   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |         <model name='isa-serial'/>
	I1002 20:46:33.088765   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       </target>
	I1002 20:46:33.088780   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </serial>
	I1002 20:46:33.088791   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <console type='pty'>
	I1002 20:46:33.088811   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <target type='serial' port='0'/>
	I1002 20:46:33.088820   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </console>
	I1002 20:46:33.088829   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <input type='mouse' bus='ps2'/>
	I1002 20:46:33.088839   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <input type='keyboard' bus='ps2'/>
	I1002 20:46:33.088852   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <audio id='1' type='none'/>
	I1002 20:46:33.088863   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <memballoon model='virtio'>
	I1002 20:46:33.088874   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1002 20:46:33.088883   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </memballoon>
	I1002 20:46:33.088892   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     <rng model='virtio'>
	I1002 20:46:33.088903   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <backend model='random'>/dev/random</backend>
	I1002 20:46:33.088914   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1002 20:46:33.088922   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |     </rng>
	I1002 20:46:33.088933   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG |   </devices>
	I1002 20:46:33.088940   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | </domain>
	I1002 20:46:33.088954   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | 
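
The two XML dumps above show the domain as the driver rendered it and as libvirt reports it after defining. A minimal sketch of the define-and-start sequence with the Go libvirt bindings, assuming the rendered XML has been written to a local file (again simplified, not the driver's actual code path):

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// domain.xml is a placeholder holding the <domain type='kvm'> document
	// printed above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	// Corresponds to the "starting domain..." step in the log.
	if err := dom.Create(); err != nil {
		panic(err)
	}
}
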
	I1002 20:46:30.303932   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.304688   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) found domain IP: 192.168.61.2
	I1002 20:46:30.304739   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has current primary IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.304749   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) reserving static IP address...
	I1002 20:46:30.305112   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-787090", mac: "52:54:00:13:32:89", ip: "192.168.61.2"} in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.505181   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) reserved static IP address 192.168.61.2 for domain kubernetes-upgrade-787090
	I1002 20:46:30.505216   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | Getting to WaitForSSH function...
	I1002 20:46:30.505277   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) waiting for SSH...
	I1002 20:46:30.508031   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.508401   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.508432   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.508571   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | Using SSH client type: external
	I1002 20:46:30.508613   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa (-rw-------)
	I1002 20:46:30.508660   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 20:46:30.508678   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | About to run SSH command:
	I1002 20:46:30.508699   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | exit 0
	I1002 20:46:30.639766   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | SSH cmd err, output: <nil>: 
	I1002 20:46:30.640169   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) domain creation complete
	I1002 20:46:30.640480   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetConfigRaw
	I1002 20:46:30.641058   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:30.641262   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:30.641438   50605 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 20:46:30.641454   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetState
	I1002 20:46:30.642915   50605 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 20:46:30.642929   50605 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 20:46:30.642935   50605 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 20:46:30.642940   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.645307   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.645663   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.645686   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.645856   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.646045   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.646210   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.646347   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.646498   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.646739   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.646757   50605 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 20:46:30.745531   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:46:30.745561   50605 main.go:141] libmachine: Detecting the provisioner...
	I1002 20:46:30.745571   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.749173   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.749554   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.749593   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.749736   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.749927   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.750109   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.750300   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.750464   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.750679   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.750693   50605 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 20:46:30.852488   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1002 20:46:30.852556   50605 main.go:141] libmachine: found compatible host: buildroot
	I1002 20:46:30.852563   50605 main.go:141] libmachine: Provisioning with buildroot...
	I1002 20:46:30.852579   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetMachineName
	I1002 20:46:30.852881   50605 buildroot.go:166] provisioning hostname "kubernetes-upgrade-787090"
	I1002 20:46:30.852908   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetMachineName
	I1002 20:46:30.853115   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.856458   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.856892   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.856921   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.857260   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.857475   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.857661   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.857842   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.858034   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.858247   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.858260   50605 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-787090 && echo "kubernetes-upgrade-787090" | sudo tee /etc/hostname
	I1002 20:46:30.978376   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-787090
	
	I1002 20:46:30.978406   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:30.981623   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.982065   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:30.982101   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:30.982291   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:30.982596   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.982813   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:30.982973   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:30.983146   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:30.983370   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:30.983388   50605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-787090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-787090/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-787090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:46:31.096257   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:46:31.096284   50605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9524/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9524/.minikube}
	I1002 20:46:31.096308   50605 buildroot.go:174] setting up certificates
	I1002 20:46:31.096319   50605 provision.go:84] configureAuth start
	I1002 20:46:31.096327   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetMachineName
	I1002 20:46:31.096638   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetIP
	I1002 20:46:31.099410   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.099816   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.099845   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.100034   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.103598   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.103973   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.104004   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.104216   50605 provision.go:143] copyHostCerts
	I1002 20:46:31.104286   50605 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem, removing ...
	I1002 20:46:31.104303   50605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem
	I1002 20:46:31.104359   50605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/ca.pem (1082 bytes)
	I1002 20:46:31.104460   50605 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem, removing ...
	I1002 20:46:31.104468   50605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem
	I1002 20:46:31.104490   50605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/cert.pem (1123 bytes)
	I1002 20:46:31.104556   50605 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem, removing ...
	I1002 20:46:31.104563   50605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem
	I1002 20:46:31.104581   50605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9524/.minikube/key.pem (1679 bytes)
	I1002 20:46:31.104635   50605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-787090 san=[127.0.0.1 192.168.61.2 kubernetes-upgrade-787090 localhost minikube]
	I1002 20:46:31.356494   50605 provision.go:177] copyRemoteCerts
	I1002 20:46:31.356550   50605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:46:31.356574   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.359489   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.359894   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.359926   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.360097   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.360286   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.360418   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.360571   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:31.445836   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:46:31.478892   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 20:46:31.512757   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:46:31.545533   50605 provision.go:87] duration metric: took 449.202242ms to configureAuth
	I1002 20:46:31.545561   50605 buildroot.go:189] setting minikube options for container-runtime
	I1002 20:46:31.545767   50605 config.go:182] Loaded profile config "kubernetes-upgrade-787090": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 20:46:31.545844   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.549071   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.549523   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.549556   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.549799   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.550020   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.550209   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.550331   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.550524   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:31.550753   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:31.550774   50605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:46:31.797669   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:46:31.797697   50605 main.go:141] libmachine: Checking connection to Docker...
	I1002 20:46:31.797707   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetURL
	I1002 20:46:31.799126   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | using libvirt version 8000000
	I1002 20:46:31.802033   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.802372   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.802402   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.802582   50605 main.go:141] libmachine: Docker is up and running!
	I1002 20:46:31.802597   50605 main.go:141] libmachine: Reticulating splines...
	I1002 20:46:31.802604   50605 client.go:171] duration metric: took 21.34466823s to LocalClient.Create
	I1002 20:46:31.802626   50605 start.go:168] duration metric: took 21.344742296s to libmachine.API.Create "kubernetes-upgrade-787090"
	I1002 20:46:31.802636   50605 start.go:294] postStartSetup for "kubernetes-upgrade-787090" (driver="kvm2")
	I1002 20:46:31.802644   50605 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:46:31.802668   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:31.802899   50605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:46:31.802921   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.805293   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.805571   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.805620   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.805761   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.805935   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.806104   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.806270   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:31.889594   50605 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:46:31.894916   50605 info.go:137] Remote host: Buildroot 2025.02
	I1002 20:46:31.894943   50605 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/addons for local assets ...
	I1002 20:46:31.895022   50605 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9524/.minikube/files for local assets ...
	I1002 20:46:31.895109   50605 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem -> 134492.pem in /etc/ssl/certs
	I1002 20:46:31.895231   50605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:46:31.907489   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem --> /etc/ssl/certs/134492.pem (1708 bytes)
	I1002 20:46:31.938717   50605 start.go:297] duration metric: took 136.067235ms for postStartSetup
	I1002 20:46:31.938796   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetConfigRaw
	I1002 20:46:31.939513   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetIP
	I1002 20:46:31.942363   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.942746   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.942781   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.943000   50605 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/config.json ...
	I1002 20:46:31.943215   50605 start.go:129] duration metric: took 21.507355283s to createHost
	I1002 20:46:31.943238   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:31.945565   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.945938   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:31.945974   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:31.946101   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:31.946305   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.946490   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:31.946643   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:31.946838   50605 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:31.947039   50605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1002 20:46:31.947049   50605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 20:46:32.051461   50605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1759437992.017112160
	
	I1002 20:46:32.051486   50605 fix.go:217] guest clock: 1759437992.017112160
	I1002 20:46:32.051497   50605 fix.go:230] Guest: 2025-10-02 20:46:32.01711216 +0000 UTC Remote: 2025-10-02 20:46:31.943227242 +0000 UTC m=+38.257701484 (delta=73.884918ms)
	I1002 20:46:32.051545   50605 fix.go:201] guest clock delta is within tolerance: 73.884918ms
	I1002 20:46:32.051551   50605 start.go:84] releasing machines lock for "kubernetes-upgrade-787090", held for 21.615891533s
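	The "guest clock" lines above compare the VM's `date +%s.%N` output against the host wall clock and only proceed because the delta (≈74ms here) is within tolerance. A minimal Go sketch of that comparison, using an assumed 2-second tolerance for illustration (the actual threshold is not shown in this log):

	// Hypothetical sketch (not minikube's code): compare a guest clock reading
	// against the host clock and report whether the drift is acceptable.
	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether |guest - host| <= tol.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tol
	}

	func main() {
		// Guest time as reported by `date +%s.%N` inside the VM (value from the log above).
		guest := time.Unix(1759437992, 17112160)
		host := time.Now()
		fmt.Println("clock delta ok:", withinTolerance(guest, host, 2*time.Second))
	}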
	I1002 20:46:32.051581   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.051884   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetIP
	I1002 20:46:32.055232   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.055670   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:32.055702   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.055930   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.056453   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.056659   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .DriverName
	I1002 20:46:32.056783   50605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:46:32.056828   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:32.056899   50605 ssh_runner.go:195] Run: cat /version.json
	I1002 20:46:32.056928   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHHostname
	I1002 20:46:32.060329   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.060420   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.060779   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:32.060815   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.060847   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:32.060864   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:32.061016   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:32.061158   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHPort
	I1002 20:46:32.061261   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:32.061330   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHKeyPath
	I1002 20:46:32.061402   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:32.061473   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetSSHUsername
	I1002 20:46:32.061537   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:32.061580   50605 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/kubernetes-upgrade-787090/id_rsa Username:docker}
	I1002 20:46:32.185063   50605 ssh_runner.go:195] Run: systemctl --version
	I1002 20:46:32.192258   50605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:46:32.373188   50605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:46:32.381268   50605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:46:32.381340   50605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:46:32.404890   50605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:46:32.404919   50605 start.go:496] detecting cgroup driver to use...
	I1002 20:46:32.404982   50605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:46:32.425834   50605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:46:32.445335   50605 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:46:32.445450   50605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:46:32.465844   50605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:46:32.487313   50605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:46:32.655116   50605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:46:32.862992   50605 docker.go:234] disabling docker service ...
	I1002 20:46:32.863056   50605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:46:32.884305   50605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:46:32.901142   50605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:46:33.070505   50605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:46:33.231851   50605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:46:33.250500   50605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:46:33.275416   50605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 20:46:33.275479   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.289196   50605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 20:46:33.289300   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.303326   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.317345   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.331450   50605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:46:33.346446   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.360427   50605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.384390   50605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:33.399181   50605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:46:33.412373   50605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 20:46:33.412450   50605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 20:46:33.434962   50605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:46:33.448505   50605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:33.599553   50605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:46:33.721806   50605 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:46:33.721887   50605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:46:33.727813   50605 start.go:564] Will wait 60s for crictl version
	I1002 20:46:33.727884   50605 ssh_runner.go:195] Run: which crictl
	I1002 20:46:33.732522   50605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 20:46:33.787517   50605 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1002 20:46:33.787622   50605 ssh_runner.go:195] Run: crio --version
	I1002 20:46:33.821256   50605 ssh_runner.go:195] Run: crio --version
	I1002 20:46:33.856834   50605 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
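	The cri-o preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image, cgroup driver) and then restarts the service. A small Go sketch that only composes those same command strings; this is an illustration of the step, not minikube's ssh_runner code, which streams each command to the guest:

	// Hypothetical sketch: build the cri-o configuration commands seen in the log.
	package main

	import "fmt"

	func crioConfigCommands(pauseImage, cgroupManager string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			// Same sed expressions as in the log above.
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, c := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
			fmt.Println(c)
		}
	}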
	W1002 20:46:29.250859   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	W1002 20:46:31.751256   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	W1002 20:46:33.755241   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	W1002 20:46:36.251127   47754 pod_ready.go:104] pod "etcd-pause-762562" is not "Ready", error: <nil>
	I1002 20:46:36.749958   47754 pod_ready.go:94] pod "etcd-pause-762562" is "Ready"
	I1002 20:46:36.749985   47754 pod_ready.go:86] duration metric: took 11.506680188s for pod "etcd-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.753552   47754 pod_ready.go:83] waiting for pod "kube-apiserver-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.759584   47754 pod_ready.go:94] pod "kube-apiserver-pause-762562" is "Ready"
	I1002 20:46:36.759619   47754 pod_ready.go:86] duration metric: took 6.033444ms for pod "kube-apiserver-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.763548   47754 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.770155   47754 pod_ready.go:94] pod "kube-controller-manager-pause-762562" is "Ready"
	I1002 20:46:36.770178   47754 pod_ready.go:86] duration metric: took 6.599923ms for pod "kube-controller-manager-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.772741   47754 pod_ready.go:83] waiting for pod "kube-proxy-v544h" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:36.947495   47754 pod_ready.go:94] pod "kube-proxy-v544h" is "Ready"
	I1002 20:46:36.947533   47754 pod_ready.go:86] duration metric: took 174.763745ms for pod "kube-proxy-v544h" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:37.147940   47754 pod_ready.go:83] waiting for pod "kube-scheduler-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:37.548851   47754 pod_ready.go:94] pod "kube-scheduler-pause-762562" is "Ready"
	I1002 20:46:37.548884   47754 pod_ready.go:86] duration metric: took 400.907691ms for pod "kube-scheduler-pause-762562" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 20:46:37.548898   47754 pod_ready.go:40] duration metric: took 12.317805308s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 20:46:37.600605   47754 start.go:627] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1002 20:46:37.605714   47754 out.go:179] * Done! kubectl is now configured to use "pause-762562" cluster and "default" namespace by default
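	The pause-762562 lines above poll the kube-system control-plane pods until each reports Ready. A hedged sketch of the same wait done from the outside with `kubectl wait` (pod names taken from the log; minikube itself polls the API via client-go rather than shelling out):

	// Hypothetical sketch: wait for the named control-plane pods to become Ready.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{
			"etcd-pause-762562",
			"kube-apiserver-pause-762562",
			"kube-controller-manager-pause-762562",
			"kube-scheduler-pause-762562",
		}
		for _, p := range pods {
			cmd := exec.Command("kubectl", "--context", "pause-762562",
				"-n", "kube-system", "wait", "--for=condition=Ready",
				"pod/"+p, "--timeout=120s")
			out, err := cmd.CombinedOutput()
			fmt.Printf("%s: %serr=%v\n", p, out, err)
		}
	}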
	I1002 20:46:34.571779   51053 main.go:141] libmachine: (stopped-upgrade-485667) waiting for domain to start...
	I1002 20:46:34.573699   51053 main.go:141] libmachine: (stopped-upgrade-485667) domain is now running
	I1002 20:46:34.573738   51053 main.go:141] libmachine: (stopped-upgrade-485667) waiting for IP...
	I1002 20:46:34.574829   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:34.575499   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:34.575522   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:34.575914   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:34.575939   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:34.575905   51272 retry.go:31] will retry after 249.568421ms: waiting for domain to come up
	I1002 20:46:34.827760   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:34.828457   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:34.828481   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:34.828852   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:34.828874   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:34.828827   51272 retry.go:31] will retry after 247.676732ms: waiting for domain to come up
	I1002 20:46:35.078406   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:35.079082   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:35.079130   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:35.079380   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:35.079403   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:35.079372   51272 retry.go:31] will retry after 432.22946ms: waiting for domain to come up
	I1002 20:46:35.513131   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:35.513921   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:35.513945   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:35.514406   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:35.514438   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:35.514285   51272 retry.go:31] will retry after 401.294905ms: waiting for domain to come up
	I1002 20:46:35.917022   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:35.917671   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:35.917692   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:35.918162   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:35.918187   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:35.918119   51272 retry.go:31] will retry after 459.883125ms: waiting for domain to come up
	I1002 20:46:36.380231   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:36.381145   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:36.381179   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:36.381416   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:36.381440   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:36.381394   51272 retry.go:31] will retry after 598.549774ms: waiting for domain to come up
	I1002 20:46:36.982415   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:36.983144   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:36.983183   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:36.983531   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:36.983567   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:36.983498   51272 retry.go:31] will retry after 1.147522409s: waiting for domain to come up
	I1002 20:46:38.133225   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | domain stopped-upgrade-485667 has defined MAC address 52:54:00:31:36:40 in network mk-stopped-upgrade-485667
	I1002 20:46:38.133889   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | no network interface addresses found for domain stopped-upgrade-485667 (source=lease)
	I1002 20:46:38.133918   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | trying to list again with source=arp
	I1002 20:46:38.134281   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | unable to find current IP address of domain stopped-upgrade-485667 in network mk-stopped-upgrade-485667 (interfaces detected: [])
	I1002 20:46:38.134297   51053 main.go:141] libmachine: (stopped-upgrade-485667) DBG | I1002 20:46:38.134202   51272 retry.go:31] will retry after 1.118886861s: waiting for domain to come up
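	The stopped-upgrade-485667 lines above keep retrying the domain IP lookup with a growing, jittered delay (≈250ms up to ~1.1s). A rough Go sketch of that retry-with-backoff shape; the exact schedule below is an assumption for illustration, not minikube's retry.go:

	// Hypothetical sketch: retry a probe with growing, jittered backoff.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(maxAttempts int, probe func() error) error {
		for i := 1; i <= maxAttempts; i++ {
			if err := probe(); err == nil {
				return nil
			}
			// Delay grows with the attempt number plus jitter, roughly matching
			// the 250ms -> 1.1s progression in the log above.
			d := time.Duration(i)*250*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
			fmt.Printf("attempt %d failed, retrying after %v\n", i, d)
			time.Sleep(d)
		}
		return errors.New("gave up waiting")
	}

	func main() {
		_ = retry(5, func() error { return errors.New("no IP yet") })
	}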
	I1002 20:46:33.858018   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) Calling .GetIP
	I1002 20:46:33.861679   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:33.862202   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:32:89", ip: ""} in network mk-kubernetes-upgrade-787090: {Iface:virbr1 ExpiryTime:2025-10-02 21:46:26 +0000 UTC Type:0 Mac:52:54:00:13:32:89 Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:kubernetes-upgrade-787090 Clientid:01:52:54:00:13:32:89}
	I1002 20:46:33.862237   50605 main.go:141] libmachine: (kubernetes-upgrade-787090) DBG | domain kubernetes-upgrade-787090 has defined IP address 192.168.61.2 and MAC address 52:54:00:13:32:89 in network mk-kubernetes-upgrade-787090
	I1002 20:46:33.862576   50605 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1002 20:46:33.867861   50605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:33.884245   50605 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-787090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-787090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:46:33.884376   50605 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 20:46:33.884440   50605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:33.930865   50605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1002 20:46:33.930946   50605 ssh_runner.go:195] Run: which lz4
	I1002 20:46:33.936225   50605 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 20:46:33.942320   50605 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 20:46:33.942351   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I1002 20:46:36.020791   50605 crio.go:462] duration metric: took 2.084601795s to copy over tarball
	I1002 20:46:36.020867   50605 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 20:46:38.150754   50605 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.129839621s)
	I1002 20:46:38.150791   50605 crio.go:469] duration metric: took 2.129964677s to extract the tarball
	I1002 20:46:38.150802   50605 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 20:46:38.198438   50605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:38.264963   50605 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:38.264993   50605 cache_images.go:85] Images are preloaded, skipping loading
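	The preload step above copies preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 on the guest and unpacks it under /var with extended attributes preserved. A minimal sketch that issues the same tar invocation (assumes a tar with lz4 support and that the tarball is already in place; shown for illustration only):

	// Hypothetical sketch: unpack the preload tarball the way the log does.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%serr=%v\n", out, err)
	}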
	I1002 20:46:38.265003   50605 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.28.0 crio true true} ...
	I1002 20:46:38.265134   50605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-787090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-787090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:46:38.265261   50605 ssh_runner.go:195] Run: crio config
	I1002 20:46:38.320424   50605 cni.go:84] Creating CNI manager for ""
	I1002 20:46:38.320459   50605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 20:46:38.320483   50605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:46:38.320511   50605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-787090 NodeName:kubernetes-upgrade-787090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:46:38.320696   50605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-787090"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:46:38.320782   50605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 20:46:38.339556   50605 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:46:38.339654   50605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:46:38.357398   50605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1002 20:46:38.382701   50605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:46:38.413711   50605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I1002 20:46:38.440738   50605 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:46:38.445642   50605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
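
The bash one-liner above strips any stale control-plane.minikube.internal mapping from /etc/hosts and appends the current node IP. Purely as an illustrative sketch of that replace-or-append step (this is not minikube's code; the local "hosts" path and the updateHosts helper are made up for the example), the same logic in Go:

package main

import (
	"log"
	"os"
	"strings"
)

// updateHosts drops any existing line that maps a name to the control-plane
// alias and appends a fresh "<ip>\t<host>" entry, mirroring the
// grep -v / echo pipeline shown in the log line above.
func updateHosts(contents, ip, host string) string {
	lines := strings.Split(strings.TrimRight(contents, "\n"), "\n")
	kept := make([]string, 0, len(lines)+1)
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	// Hypothetical local copy; the real command edits /etc/hosts on the VM over SSH.
	const path = "hosts"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	out := updateHosts(string(data), "192.168.61.2", "control-plane.minikube.internal")
	if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
}
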
	I1002 20:46:38.462390   50605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:38.636553   50605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:38.693083   50605 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090 for IP: 192.168.61.2
	I1002 20:46:38.693108   50605 certs.go:195] generating shared ca certs ...
	I1002 20:46:38.693130   50605 certs.go:227] acquiring lock for ca certs: {Name:mk36b72fb138c08da6f63c209f5b6ddd4af4f5a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:38.693318   50605 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key
	I1002 20:46:38.693386   50605 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key
	I1002 20:46:38.693405   50605 certs.go:257] generating profile certs ...
	I1002 20:46:38.693480   50605 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/client.key
	I1002 20:46:38.693512   50605 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/client.crt with IP's: []
	I1002 20:46:39.091194   50605 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/client.crt ...
	I1002 20:46:39.091224   50605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/client.crt: {Name:mk6d9ccd4751b64db91769ff4b677ef0e503b990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:39.091433   50605 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/client.key ...
	I1002 20:46:39.091452   50605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/client.key: {Name:mk545d8ddd5d868c2f8ddba4605ea3e979179c2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:39.091585   50605 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.key.8976e7cb
	I1002 20:46:39.091622   50605 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.crt.8976e7cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.2]
	I1002 20:46:39.307074   50605 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.crt.8976e7cb ...
	I1002 20:46:39.307105   50605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.crt.8976e7cb: {Name:mkf8c2003b22b9769e015e3c4ab65dd241fe0899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:39.307321   50605 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.key.8976e7cb ...
	I1002 20:46:39.307343   50605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.key.8976e7cb: {Name:mk5d62ab7361728d828b52b9890c65e6faae0b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:39.307449   50605 certs.go:382] copying /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.crt.8976e7cb -> /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.crt
	I1002 20:46:39.307561   50605 certs.go:386] copying /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.key.8976e7cb -> /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.key
	I1002 20:46:39.307647   50605 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.key
	I1002 20:46:39.307666   50605 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.crt with IP's: []
	I1002 20:46:39.542408   50605 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.crt ...
	I1002 20:46:39.542444   50605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.crt: {Name:mkd79d279609b6a0deb5a93b8a02f9a1a50f38a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:39.542656   50605 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.key ...
	I1002 20:46:39.542678   50605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.key: {Name:mk03fb0060fdc2f08f7d55f8b83ab0b1bd02f1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
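
The certs.go steps above generate three profile certificates for this cluster: a "minikube-user" client cert, an apiserver serving cert with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.2, and an "aggregator" proxy-client cert, each signed by the shared minikube CA. As a rough illustration of that signing flow only (a minimal sketch assuming an existing PKCS#1 RSA CA key pair on disk; the file paths and the signLeafCert helper are placeholders, not minikube's API):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// signLeafCert is a hypothetical helper: it issues a leaf certificate carrying
// the given IP SANs, signed by the supplied CA certificate and key.
func signLeafCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // the SAN list logged above: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.2
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Placeholder input files; the CA in this report lives under .minikube/.
	caCertPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	certBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if certBlock == nil || keyBlock == nil {
		log.Fatal("failed to decode CA PEM input")
	}
	caCert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.2")}
	certPEM, keyPEM, err := signLeafCert(caCert, caKey, "minikube", ips)
	if err != nil {
		log.Fatal(err)
	}
	_ = os.WriteFile("apiserver.crt", certPEM, 0o644)
	_ = os.WriteFile("apiserver.key", keyPEM, 0o600)
}

The detail this sketch mirrors is that the SAN IP list is fixed in the certificate template at signing time, matching the IP list logged above for the apiserver cert.
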
	I1002 20:46:39.542909   50605 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449.pem (1338 bytes)
	W1002 20:46:39.542956   50605 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449_empty.pem, impossibly tiny 0 bytes
	I1002 20:46:39.542965   50605 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 20:46:39.542997   50605 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:46:39.543033   50605 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:46:39.543061   50605 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/certs/key.pem (1679 bytes)
	I1002 20:46:39.543116   50605 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem (1708 bytes)
	I1002 20:46:39.543772   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:46:39.604822   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:46:39.658580   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:46:39.700954   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 20:46:39.742120   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 20:46:39.782163   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:46:39.824540   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:46:39.862219   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/kubernetes-upgrade-787090/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:46:39.898935   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/certs/13449.pem --> /usr/share/ca-certificates/13449.pem (1338 bytes)
	I1002 20:46:39.937981   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/ssl/certs/134492.pem --> /usr/share/ca-certificates/134492.pem (1708 bytes)
	I1002 20:46:39.976183   50605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:46:40.012094   50605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:46:40.038820   50605 ssh_runner.go:195] Run: openssl version
	I1002 20:46:40.046647   50605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134492.pem && ln -fs /usr/share/ca-certificates/134492.pem /etc/ssl/certs/134492.pem"
	I1002 20:46:40.064158   50605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134492.pem
	I1002 20:46:40.071052   50605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 19:56 /usr/share/ca-certificates/134492.pem
	I1002 20:46:40.071120   50605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134492.pem
	I1002 20:46:40.079416   50605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134492.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:46:40.095191   50605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:46:40.110893   50605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:40.117019   50605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:48 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:40.117091   50605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:40.125611   50605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:46:40.143012   50605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13449.pem && ln -fs /usr/share/ca-certificates/13449.pem /etc/ssl/certs/13449.pem"
	I1002 20:46:40.159346   50605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13449.pem
	I1002 20:46:40.165616   50605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 19:56 /usr/share/ca-certificates/13449.pem
	I1002 20:46:40.165690   50605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13449.pem
	I1002 20:46:40.174068   50605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13449.pem /etc/ssl/certs/51391683.0"
	I1002 20:46:40.189716   50605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:46:40.195406   50605 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:46:40.195486   50605 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-787090 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.0 ClusterName:kubernetes-upgrade-787090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCor
eDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:46:40.195572   50605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:46:40.195630   50605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:46:40.251097   50605 cri.go:89] found id: ""
	I1002 20:46:40.251196   50605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:46:40.265301   50605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:46:40.285106   50605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:46:40.304688   50605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:46:40.304710   50605 kubeadm.go:157] found existing configuration files:
	
	I1002 20:46:40.304773   50605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:46:40.322404   50605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:46:40.322471   50605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:46:40.340613   50605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:46:40.359771   50605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:46:40.359837   50605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:46:40.381820   50605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:46:40.398857   50605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:46:40.398921   50605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:46:40.416754   50605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:46:40.433593   50605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:46:40.433672   50605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:46:40.451539   50605 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 20:46:40.514164   50605 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1002 20:46:40.514242   50605 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:46:40.682558   50605 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:46:40.682737   50605 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:46:40.682921   50605 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 20:46:40.900945   50605 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.538354538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759438002538334021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27b358ef-a76f-482c-bd64-499c377ea5aa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.539002437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=902e1731-bca8-4cbe-9a85-c6fcb2fef068 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.539123954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=902e1731-bca8-4cbe-9a85-c6fcb2fef068 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.539476961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=902e1731-bca8-4cbe-9a85-c6fcb2fef068 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.588418624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c29a5921-5585-4414-b00f-db41c38c3a19 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.588525857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c29a5921-5585-4414-b00f-db41c38c3a19 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.590514902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=caf86b03-31bd-44b6-8a1f-de0b8c99604d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.591079114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759438002591051474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=caf86b03-31bd-44b6-8a1f-de0b8c99604d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.591968970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a29c2145-3a62-444f-8a5d-71d46cad3b2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.592044971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a29c2145-3a62-444f-8a5d-71d46cad3b2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.592382429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a29c2145-3a62-444f-8a5d-71d46cad3b2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.662491595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4fad5112-c3ac-4ab8-9cd2-82a170236a6a name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.662713085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4fad5112-c3ac-4ab8-9cd2-82a170236a6a name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.664682053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96b622fd-3b5d-4e72-9058-74c9662fa019 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.665509674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759438002665487784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96b622fd-3b5d-4e72-9058-74c9662fa019 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.666748111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=398ac508-7fbb-41c0-b102-7f947efe1fc2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.666898423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=398ac508-7fbb-41c0-b102-7f947efe1fc2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.667381687Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=398ac508-7fbb-41c0-b102-7f947efe1fc2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.733917176Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4154280a-ff14-41a5-9a1a-a0cb170a2863 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.734033423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4154280a-ff14-41a5-9a1a-a0cb170a2863 name=/runtime.v1.RuntimeService/Version
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.735570768Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fea9a18-2e2e-4c1b-bfef-dbca3c9652f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.736373256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1759438002736339955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fea9a18-2e2e-4c1b-bfef-dbca3c9652f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.737351959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55199081-92ae-4918-89e7-92c7da074c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.737495055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55199081-92ae-4918-89e7-92c7da074c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 02 20:46:42 pause-762562 crio[2816]: time="2025-10-02 20:46:42.737899076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1759437980630572818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1759437980621452909,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e9e2
0c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1759437980620710836,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca602dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1759437978215562378,Labels:map[string]string{io.kubernetes.container.name: co
redns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2
,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1759437973203033751,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1759437971173387418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b,PodSandboxId:71ac4f78242c2dd4cd867fca279a66f60905b776fd2ca6
02dbd1e43630c1ae20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1759437949938105336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-9pqwk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37e86407-39b3-4b89-a6d2-943913357f8d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f,PodSandboxId:9228cd5ff0c1e4a7ba0fad5532e6b2295f68862109d9486f1504688717d7f7ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1759437948766969407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v544h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45b79789-7110-4e85-8a30-4b58f010d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538,PodSandboxId:a1519324ed9a7ffb611e1c68b8eccd5ec161e6b5bba235c5455d72eac1b329e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1759437948580729552,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa18f3ebb8362be0ed98f491ff4a767,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934,PodSandboxId:ef8699ca9248be216fc1ac0d35f4909f997522ae89638b5edc93574a48344f4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1759437948562103797,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c8fcc22067486fa533a98ce78c33ddd,},Annotations:map[string]string{io.kubernetes.container.has
h: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427,PodSandboxId:fabda5ffa77a2a41a6b535f4de550002cbcb80aab75ea20c8b82326a0b32f39f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1759437948519122158,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5a5ddca6b5565d14bff819e2e1ae8dde,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2,PodSandboxId:79084517cb61a63633ebbf6f0ada2bbb76df21f6998b8fd2fa7eaa6112c4d039,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1759437948165817497,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 409ca0f072a2582523b166fc4166d77e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55199081-92ae-4918-89e7-92c7da074c06 name=/runtime.v1.RuntimeService/ListContainers
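The crio debug entries above are the gRPC interceptor logging each CRI request/response pair (Version, ImageFsInfo, ListContainers). For reference, a minimal Go sketch of issuing the same ListContainers call against the CRI-O socket is shown below; it is not part of the captured log, and the socket path and timeout are assumptions for a default CRI-O install.

// Sketch (assumption, not from the report): query the CRI RuntimeService the
// same way the interceptor log above records it.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; this path is the usual default.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied" debug message in the log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %-25s attempt=%d state=%s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

The "container status" table that follows is essentially this response rendered one container per row.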
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f35539b42c1c2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago      Running             kube-apiserver            2                   fabda5ffa77a2       kube-apiserver-pause-762562
	d30719ae7ea42       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago      Running             etcd                      2                   ef8699ca9248b       etcd-pause-762562
	1db5c4e94a2ca       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago      Running             kube-controller-manager   2                   79084517cb61a       kube-controller-manager-pause-762562
	415b84c3bfa0e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   25 seconds ago      Running             coredns                   2                   71ac4f78242c2       coredns-66bc5c9577-9pqwk
	1e5f90d35eacc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   30 seconds ago      Running             kube-proxy                2                   9228cd5ff0c1e       kube-proxy-v544h
	acc505f66733b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   32 seconds ago      Running             kube-scheduler            2                   a1519324ed9a7       kube-scheduler-pause-762562
	f9a8dac79d64a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   53 seconds ago      Exited              coredns                   1                   71ac4f78242c2       coredns-66bc5c9577-9pqwk
	5997f05643c28       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   54 seconds ago      Exited              kube-proxy                1                   9228cd5ff0c1e       kube-proxy-v544h
	009491ddb7bb3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   54 seconds ago      Exited              kube-scheduler            1                   a1519324ed9a7       kube-scheduler-pause-762562
	afcb036ebb5bf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   54 seconds ago      Exited              etcd                      1                   ef8699ca9248b       etcd-pause-762562
	300685cf73111       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   54 seconds ago      Exited              kube-apiserver            1                   fabda5ffa77a2       kube-apiserver-pause-762562
	3ba8febf0b679       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   55 seconds ago      Exited              kube-controller-manager   1                   79084517cb61a       kube-controller-manager-pause-762562
	
	
	==> coredns [415b84c3bfa0e1948292eb772cd56b6a90ae284b7a57ce6e16b7e41a80d55132] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55967 - 34154 "HINFO IN 5038972822532882445.8099331229069857526. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026342327s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
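The "forbidden" errors in this CoreDNS log indicate that, for a short window after the apiserver restart, the coredns service account was not yet authorized to list services, namespaces, and endpointslices. A hedged Go sketch of verifying such a permission with a SubjectAccessReview is shown below; the kubeconfig path is a placeholder, and only the user string is taken from the log above.

// Sketch (assumption, not from the report): ask the API server whether the
// coredns service account may list Services.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			// Identity as it appears in the CoreDNS errors above.
			User: "system:serviceaccount:kube-system:coredns",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "services",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}

The same check can be done from a shell with kubectl auth can-i list services --as=system:serviceaccount:kube-system:coredns.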
	
	
	==> coredns [f9a8dac79d64a6b2331367dd077293d75ba46c5c9e4cd6b69080328cafe9203b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45495 - 21436 "HINFO IN 7615697003491596128.4513909049238198017. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032593539s
	
	
	==> describe nodes <==
	Name:               pause-762562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-762562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=77ec36ba275b9c51be897969b75e45bdcce52b4b
	                    minikube.k8s.io/name=pause-762562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T20_44_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 20:44:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-762562
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 20:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 20:46:23 +0000   Thu, 02 Oct 2025 20:44:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.218
	  Hostname:    pause-762562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9e3bed246a94647aa8538c174b56581
	  System UUID:                c9e3bed2-46a9-4647-aa85-38c174b56581
	  Boot ID:                    9d0457cf-5e5d-4040-abdd-4137d21878a4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9pqwk                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m11s
	  kube-system                 etcd-pause-762562                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m18s
	  kube-system                 kube-apiserver-pause-762562             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-pause-762562    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-v544h                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-pause-762562             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m9s               kube-proxy       
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 2m16s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m16s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m16s              kubelet          Node pause-762562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s              kubelet          Node pause-762562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s              kubelet          Node pause-762562 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m15s              kubelet          Node pause-762562 status is now: NodeReady
	  Normal  RegisteredNode           2m12s              node-controller  Node pause-762562 event: Registered Node pause-762562 in Controller
	  Normal  RegisteredNode           47s                node-controller  Node pause-762562 event: Registered Node pause-762562 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-762562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-762562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-762562 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-762562 event: Registered Node pause-762562 in Controller
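The node description above (conditions, capacity, events) comes from the apiserver's Node object. As a small illustrative sketch, and not part of the captured output, the same condition rows could be read programmatically with client-go; the kubeconfig path below is a placeholder.

// Sketch (assumption, not from the report): print the node conditions shown
// in the "describe nodes" table.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-762562", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
	}
}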
	
	
	==> dmesg <==
	[Oct 2 20:43] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000052] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002949] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Oct 2 20:44] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082952] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.125217] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.124105] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.147957] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.813558] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.304317] kauditd_printk_skb: 210 callbacks suppressed
	[Oct 2 20:45] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.028446] kauditd_printk_skb: 319 callbacks suppressed
	[Oct 2 20:46] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.530494] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.127975] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.422899] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [afcb036ebb5bffa076fb3ed3c01a6ad15f57a1b6bf0ff3e0807a974f378b7934] <==
	{"level":"warn","ts":"2025-10-02T20:45:52.128677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.149387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.184116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.205570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.228474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.243260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:45:52.346198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32872","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T20:46:01.007352Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T20:46:01.007458Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-762562","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.218:2380"],"advertise-client-urls":["https://192.168.50.218:2379"]}
	{"level":"error","ts":"2025-10-02T20:46:01.007595Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:46:08.010698Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T20:46:08.010804Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:46:08.010827Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4bfeef2bb38c2b5","current-leader-member-id":"d4bfeef2bb38c2b5"}
	{"level":"info","ts":"2025-10-02T20:46:08.010966Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-02T20:46:08.010978Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T20:46:08.011878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:46:08.011943Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:46:08.011958Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T20:46:08.012007Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.218:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T20:46:08.012036Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.218:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T20:46:08.012044Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.218:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:46:08.017932Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.218:2380"}
	{"level":"error","ts":"2025-10-02T20:46:08.018008Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.218:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T20:46:08.018033Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.218:2380"}
	{"level":"info","ts":"2025-10-02T20:46:08.018040Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-762562","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.218:2380"],"advertise-client-urls":["https://192.168.50.218:2379"]}
	
	
	==> etcd [d30719ae7ea42900cfbe0fc77b7b4bd2b16376da3c1075365ccac7944df39ac0] <==
	{"level":"warn","ts":"2025-10-02T20:46:22.285961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.296259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.313379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.323721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.329236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.338463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.346736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.364088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.372637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.392404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.397103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.405018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.420862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.435123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.436105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.444765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.462863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.465834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.487647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.501315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.509258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.522411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.559470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:22.608918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T20:46:40.197656Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.239831ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14030289155289068186 > lease_revoke:<id:42b599a6ad05316e>","response":"size:29"}
	
	
	==> kernel <==
	 20:46:43 up 2 min,  0 users,  load average: 1.16, 0.64, 0.26
	Linux pause-762562 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [300685cf731113bea75c0e99d7333e6114a478a6aa839901678675fee0ad1427] <==
	W1002 20:46:16.775395       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:16.794440       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:16.825664       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:16.828250       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.048469       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.060341       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.068082       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.109504       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.144419       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.179244       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.194097       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.213974       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.226688       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.249296       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.331118       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.379905       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.534324       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.539921       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.590525       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.737488       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.759085       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.903274       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:17.932254       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:18.026817       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 20:46:18.028235       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f35539b42c1c2e98b24e5dd45cd60f757f290fd51cc42d67ed1e7b9acfbfc63c] <==
	I1002 20:46:23.370375       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 20:46:23.372417       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 20:46:23.372591       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 20:46:23.372699       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 20:46:23.372719       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 20:46:23.374418       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 20:46:23.377822       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 20:46:23.378717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 20:46:23.378835       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 20:46:23.378887       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 20:46:23.380657       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 20:46:23.382728       1 cache.go:39] Caches are synced for autoregister controller
	E1002 20:46:23.382919       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 20:46:23.395328       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 20:46:23.395385       1 policy_source.go:240] refreshing policies
	I1002 20:46:23.401947       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:46:23.986330       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 20:46:24.173945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 20:46:24.740359       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 20:46:24.787902       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 20:46:24.820331       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:46:24.831503       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 20:46:36.413048       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 20:46:36.416698       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 20:46:36.420852       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1db5c4e94a2caec08957ed7cdca672b18b8d3eedf8fce9829d0f1c03a6c329ea] <==
	I1002 20:46:26.665638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-762562"
	I1002 20:46:26.665810       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 20:46:26.663236       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 20:46:26.665926       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 20:46:26.663254       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 20:46:26.663586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 20:46:26.668003       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:46:26.670239       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 20:46:26.672446       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:46:26.672549       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:46:26.676032       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 20:46:26.678327       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 20:46:26.682649       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 20:46:26.684951       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:46:26.691225       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 20:46:26.699449       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 20:46:26.702726       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:46:26.709078       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 20:46:26.711792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:46:26.711806       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:46:26.711811       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:46:26.712844       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:46:26.714653       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 20:46:26.715008       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:46:26.726607       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [3ba8febf0b679f49e881c4bed1fd3921cf05675a519a6420d3ca64f946bab5c2] <==
	I1002 20:45:56.463547       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:45:56.464492       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 20:45:56.465621       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 20:45:56.465716       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 20:45:56.470663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 20:45:56.472979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 20:45:56.476365       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 20:45:56.480809       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 20:45:56.480887       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 20:45:56.484091       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 20:45:56.485344       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 20:45:56.487747       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 20:45:56.493260       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 20:45:56.495674       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:45:56.504652       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 20:45:56.504705       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 20:45:56.505182       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 20:45:56.504797       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 20:45:56.504848       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 20:45:56.505610       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 20:45:56.505644       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 20:45:56.504874       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 20:45:56.504723       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 20:45:56.507480       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 20:45:56.519885       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [1e5f90d35eaccdb7460d46238f820bb4caa1dfefde812a76047213d4dedab0c2] <==
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 20:46:13.567798       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 20:46:13.567825       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:46:13.585272       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:46:13.585772       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:46:13.585813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:46:13.593379       1 config.go:200] "Starting service config controller"
	I1002 20:46:13.593422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:46:13.593447       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:46:13.593453       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:46:13.593492       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:46:13.593497       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:46:13.594102       1 config.go:309] "Starting node config controller"
	I1002 20:46:13.594112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:46:13.594118       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:46:13.694037       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:46:13.694113       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 20:46:13.694506       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1002 20:46:18.150636       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	E1002 20:46:23.312573       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1002 20:46:23.312935       1 reflector.go:205] "Failed to watch" err="nodes \"pause-762562\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 20:46:23.313014       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:46:23.313044       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	
	
	==> kube-proxy [5997f05643c2892a1aec1713d5d5850b5bb8dbb4b405c8f475d6fc045e0a010f] <==
	I1002 20:45:51.070205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 20:45:53.172811       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 20:45:53.172862       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.218"]
	E1002 20:45:53.172936       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 20:45:53.344518       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1002 20:45:53.345299       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 20:45:53.345368       1 server_linux.go:132] "Using iptables Proxier"
	I1002 20:45:53.376778       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 20:45:53.378062       1 server.go:527] "Version info" version="v1.34.1"
	I1002 20:45:53.378237       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:45:53.385413       1 config.go:200] "Starting service config controller"
	I1002 20:45:53.385509       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 20:45:53.385538       1 config.go:106] "Starting endpoint slice config controller"
	I1002 20:45:53.385554       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 20:45:53.385583       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 20:45:53.385598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 20:45:53.385996       1 config.go:309] "Starting node config controller"
	I1002 20:45:53.386037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 20:45:53.386046       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 20:45:53.485769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 20:45:53.485806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 20:45:53.485827       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [009491ddb7bb3061642f0e5d54348289db9b2fd14cbacfc127afd594f06ab538] <==
	I1002 20:45:51.795259       1 serving.go:386] Generated self-signed cert in-memory
	W1002 20:45:53.172628       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 20:45:53.172734       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 20:45:53.172757       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 20:45:53.173210       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 20:45:53.211575       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 20:45:53.211636       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 20:45:53.216073       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:45:53.216111       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:45:53.216509       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 20:45:53.216574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 20:45:53.316546       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:46:00.865499       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 20:46:00.865646       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 20:46:00.865672       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 20:46:00.866245       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 20:46:00.866981       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 20:46:00.867096       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [acc505f66733b27148ffd1296766577b5aae31d48ed7833ecd6c06f0921c5234] <==
	E1002 20:46:20.382694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 20:46:20.386074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:46:20.400041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.50.218:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 20:46:20.435724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.50.218:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 20:46:20.461500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.50.218:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 20:46:20.485977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:46:20.495806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.50.218:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 20:46:20.657725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.50.218:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:46:20.684315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.50.218:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 20:46:20.729348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.50.218:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.50.218:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:46:23.223803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1002 20:46:23.301350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 20:46:23.301479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 20:46:23.301540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 20:46:23.302312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 20:46:23.302667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 20:46:23.303599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 20:46:23.304255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 20:46:23.304341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 20:46:23.304371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 20:46:23.305206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 20:46:23.305255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 20:46:23.305315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 20:46:23.305350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1002 20:46:28.668249       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 20:46:22 pause-762562 kubelet[4177]: E1002 20:46:22.139211    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:22 pause-762562 kubelet[4177]: E1002 20:46:22.139523    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.143333    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.143807    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.144278    4177 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-762562\" not found" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.254310    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.435061    4177 kubelet_node_status.go:124] "Node was previously registered" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.435221    4177 kubelet_node_status.go:78] "Successfully registered node" node="pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.435248    4177 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.436969    4177 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.507611    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-762562\" already exists" pod="kube-system/etcd-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.507738    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.527430    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-762562\" already exists" pod="kube-system/kube-apiserver-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.527583    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.537799    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-762562\" already exists" pod="kube-system/kube-controller-manager-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.537965    4177 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: E1002 20:46:23.549418    4177 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-762562\" already exists" pod="kube-system/kube-scheduler-pause-762562"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.927039    4177 apiserver.go:52] "Watching apiserver"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.958311    4177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.979650    4177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45b79789-7110-4e85-8a30-4b58f010d5c0-xtables-lock\") pod \"kube-proxy-v544h\" (UID: \"45b79789-7110-4e85-8a30-4b58f010d5c0\") " pod="kube-system/kube-proxy-v544h"
	Oct 02 20:46:23 pause-762562 kubelet[4177]: I1002 20:46:23.979683    4177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45b79789-7110-4e85-8a30-4b58f010d5c0-lib-modules\") pod \"kube-proxy-v544h\" (UID: \"45b79789-7110-4e85-8a30-4b58f010d5c0\") " pod="kube-system/kube-proxy-v544h"
	Oct 02 20:46:30 pause-762562 kubelet[4177]: E1002 20:46:30.093978    4177 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759437990093605642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 02 20:46:30 pause-762562 kubelet[4177]: E1002 20:46:30.094576    4177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759437990093605642  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 02 20:46:40 pause-762562 kubelet[4177]: E1002 20:46:40.097544    4177 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759438000096979702  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 02 20:46:40 pause-762562 kubelet[4177]: E1002 20:46:40.097588    4177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759438000096979702  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-762562 -n pause-762562
helpers_test.go:269: (dbg) Run:  kubectl --context pause-762562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (90.55s)

                                                
                                    

Test pass (281/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 26.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 13.79
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 88.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 205.61
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.53
35 TestAddons/parallel/Registry 18.23
36 TestAddons/parallel/RegistryCreds 0.75
38 TestAddons/parallel/InspektorGadget 6.55
39 TestAddons/parallel/MetricsServer 6.88
41 TestAddons/parallel/CSI 71.18
42 TestAddons/parallel/Headlamp 25.33
43 TestAddons/parallel/CloudSpanner 6.64
44 TestAddons/parallel/LocalPath 56.19
45 TestAddons/parallel/NvidiaDevicePlugin 6.58
46 TestAddons/parallel/Yakd 11.98
48 TestAddons/StoppedEnableDisable 80.75
49 TestCertOptions 42.96
50 TestCertExpiration 309.72
52 TestForceSystemdFlag 71.18
53 TestForceSystemdEnv 47.66
55 TestKVMDriverInstallOrUpdate 1.49
59 TestErrorSpam/setup 40.58
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.76
62 TestErrorSpam/pause 1.69
63 TestErrorSpam/unpause 1.98
64 TestErrorSpam/stop 4.93
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 54.06
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 37.2
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
76 TestFunctional/serial/CacheCmd/cache/add_local 2.34
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 33.53
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.56
87 TestFunctional/serial/LogsFileCmd 1.54
88 TestFunctional/serial/InvalidService 4.78
90 TestFunctional/parallel/ConfigCmd 0.32
91 TestFunctional/parallel/DashboardCmd 18.06
92 TestFunctional/parallel/DryRun 0.28
93 TestFunctional/parallel/InternationalLanguage 0.13
94 TestFunctional/parallel/StatusCmd 0.87
98 TestFunctional/parallel/ServiceCmdConnect 21.57
99 TestFunctional/parallel/AddonsCmd 0.12
100 TestFunctional/parallel/PersistentVolumeClaim 47.56
102 TestFunctional/parallel/SSHCmd 0.37
103 TestFunctional/parallel/CpCmd 1.28
104 TestFunctional/parallel/MySQL 24.42
105 TestFunctional/parallel/FileSync 0.22
106 TestFunctional/parallel/CertSync 1.28
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.49
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.46
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
119 TestFunctional/parallel/ImageCommands/ImageBuild 4.7
120 TestFunctional/parallel/ImageCommands/Setup 1.97
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.93
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.57
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.15
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
140 TestFunctional/parallel/ServiceCmd/DeployApp 16.2
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
142 TestFunctional/parallel/ProfileCmd/profile_list 0.45
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
144 TestFunctional/parallel/MountCmd/any-port 9.62
145 TestFunctional/parallel/ServiceCmd/List 1.29
146 TestFunctional/parallel/ServiceCmd/JSONOutput 1.31
147 TestFunctional/parallel/MountCmd/specific-port 1.99
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
149 TestFunctional/parallel/ServiceCmd/Format 0.43
150 TestFunctional/parallel/ServiceCmd/URL 0.61
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
152 TestFunctional/parallel/Version/short 0.08
153 TestFunctional/parallel/Version/components 0.54
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 208.42
162 TestMultiControlPlane/serial/DeployApp 7.5
163 TestMultiControlPlane/serial/PingHostFromPods 1.29
164 TestMultiControlPlane/serial/AddWorkerNode 49.36
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
167 TestMultiControlPlane/serial/CopyFile 13.81
168 TestMultiControlPlane/serial/StopSecondaryNode 84.35
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
170 TestMultiControlPlane/serial/RestartSecondaryNode 35.73
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.31
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 383.7
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.98
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 238.87
176 TestMultiControlPlane/serial/RestartCluster 100.44
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 81.81
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
183 TestJSONOutput/start/Command 80.84
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.8
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.7
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.1
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 85.28
215 TestMountStart/serial/StartWithMountFirst 22.34
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 24.78
218 TestMountStart/serial/VerifyMountSecond 0.36
219 TestMountStart/serial/DeleteFirst 0.7
220 TestMountStart/serial/VerifyMountPostDelete 0.36
221 TestMountStart/serial/Stop 1.31
222 TestMountStart/serial/RestartStopped 20.35
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 131.76
227 TestMultiNode/serial/DeployApp2Nodes 5.98
228 TestMultiNode/serial/PingHostFrom2Pods 0.82
229 TestMultiNode/serial/AddNode 45.81
230 TestMultiNode/serial/MultiNodeLabels 0.08
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.35
233 TestMultiNode/serial/StopNode 2.47
234 TestMultiNode/serial/StartAfterStop 39.36
235 TestMultiNode/serial/RestartKeepsNodes 303.54
236 TestMultiNode/serial/DeleteNode 2.94
237 TestMultiNode/serial/StopMultiNode 173.46
238 TestMultiNode/serial/RestartMultiNode 119.9
239 TestMultiNode/serial/ValidateNameConflict 40.69
246 TestScheduledStopUnix 113.46
250 TestRunningBinaryUpgrade 164.67
252 TestKubernetesUpgrade 183.74
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestPause/serial/Start 108.43
265 TestNoKubernetes/serial/StartWithK8s 86.63
266 TestNoKubernetes/serial/StartWithStopK8s 7.87
267 TestNoKubernetes/serial/Start 40.6
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
270 TestNoKubernetes/serial/ProfileList 3.04
274 TestNoKubernetes/serial/Stop 1.54
279 TestNetworkPlugins/group/false 3.38
280 TestNoKubernetes/serial/StartNoArgs 27.37
284 TestStoppedBinaryUpgrade/Setup 3.16
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
286 TestStoppedBinaryUpgrade/Upgrade 133.1
288 TestStartStop/group/old-k8s-version/serial/FirstStart 107.99
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
291 TestStartStop/group/embed-certs/serial/FirstStart 94.05
293 TestStartStop/group/no-preload/serial/FirstStart 100.56
294 TestStartStop/group/old-k8s-version/serial/DeployApp 11.35
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
296 TestStartStop/group/old-k8s-version/serial/Stop 84.29
297 TestStartStop/group/embed-certs/serial/DeployApp 10.39
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
299 TestStartStop/group/embed-certs/serial/Stop 88.46
300 TestStartStop/group/no-preload/serial/DeployApp 10.33
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
302 TestStartStop/group/no-preload/serial/Stop 89.22
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/old-k8s-version/serial/SecondStart 45.63
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 105.02
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
308 TestStartStop/group/embed-certs/serial/SecondStart 66.83
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
311 TestStartStop/group/no-preload/serial/SecondStart 66.67
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
313 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
314 TestStartStop/group/old-k8s-version/serial/Pause 3.91
316 TestStartStop/group/newest-cni/serial/FirstStart 54.88
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.35
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
321 TestStartStop/group/embed-certs/serial/Pause 3.41
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.44
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 89.58
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 16.01
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
327 TestStartStop/group/newest-cni/serial/Stop 73.05
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
330 TestStartStop/group/no-preload/serial/Pause 2.91
331 TestNetworkPlugins/group/auto/Start 87.01
332 TestNetworkPlugins/group/kindnet/Start 103.78
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
334 TestStartStop/group/newest-cni/serial/SecondStart 37.82
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.99
337 TestNetworkPlugins/group/auto/KubeletFlags 0.25
338 TestNetworkPlugins/group/auto/NetCatPod 11.3
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
342 TestStartStop/group/newest-cni/serial/Pause 4.28
343 TestNetworkPlugins/group/enable-default-cni/Start 84.54
344 TestNetworkPlugins/group/auto/DNS 0.19
345 TestNetworkPlugins/group/auto/Localhost 0.18
346 TestNetworkPlugins/group/auto/HairPin 0.16
347 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/Start 77.96
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 19.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
351 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
352 TestNetworkPlugins/group/kindnet/DNS 0.21
353 TestNetworkPlugins/group/kindnet/Localhost 0.16
354 TestNetworkPlugins/group/kindnet/HairPin 0.17
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.04
358 TestNetworkPlugins/group/flannel/Start 81.38
359 TestNetworkPlugins/group/custom-flannel/Start 100.6
360 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
361 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.76
362 TestNetworkPlugins/group/calico/ControllerPod 6.05
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
366 TestNetworkPlugins/group/calico/KubeletFlags 0.52
367 TestNetworkPlugins/group/calico/NetCatPod 13.17
368 TestNetworkPlugins/group/bridge/Start 86.65
369 TestNetworkPlugins/group/calico/DNS 0.18
370 TestNetworkPlugins/group/calico/Localhost 0.14
371 TestNetworkPlugins/group/calico/HairPin 0.16
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
374 TestNetworkPlugins/group/flannel/NetCatPod 11.25
375 TestNetworkPlugins/group/flannel/DNS 0.16
376 TestNetworkPlugins/group/flannel/Localhost 0.14
377 TestNetworkPlugins/group/flannel/HairPin 0.13
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
380 TestNetworkPlugins/group/custom-flannel/DNS 0.17
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
384 TestNetworkPlugins/group/bridge/NetCatPod 10.25
385 TestNetworkPlugins/group/bridge/DNS 0.15
386 TestNetworkPlugins/group/bridge/Localhost 0.12
387 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (26.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-364492 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-364492 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (26.803146094s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (26.80s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 19:47:18.563918   13449 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 19:47:18.564018   13449 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-364492
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-364492: exit status 85 (58.713207ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-364492 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-364492 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:46:51
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:46:51.799826   13461 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:46:51.800069   13461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:51.800077   13461 out.go:374] Setting ErrFile to fd 2...
	I1002 19:46:51.800080   13461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:51.800271   13461 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	W1002 19:46:51.800392   13461 root.go:315] Error reading config file at /home/jenkins/minikube-integration/21683-9524/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-9524/.minikube/config/config.json: no such file or directory
	I1002 19:46:51.800869   13461 out.go:368] Setting JSON to true
	I1002 19:46:51.801759   13461 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1755,"bootTime":1759432657,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:46:51.801845   13461 start.go:140] virtualization: kvm guest
	I1002 19:46:51.803730   13461 out.go:99] [download-only-364492] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1002 19:46:51.803849   13461 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 19:46:51.803907   13461 notify.go:221] Checking for updates...
	I1002 19:46:51.804968   13461 out.go:171] MINIKUBE_LOCATION=21683
	I1002 19:46:51.806550   13461 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:46:51.807522   13461 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 19:46:51.808416   13461 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 19:46:51.809385   13461 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 19:46:51.810935   13461 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 19:46:51.811165   13461 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:46:52.278908   13461 out.go:99] Using the kvm2 driver based on user configuration
	I1002 19:46:52.278945   13461 start.go:306] selected driver: kvm2
	I1002 19:46:52.278952   13461 start.go:936] validating driver "kvm2" against <nil>
	I1002 19:46:52.279276   13461 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:46:52.279405   13461 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:46:52.294353   13461 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 19:46:52.294387   13461 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:46:52.307639   13461 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 19:46:52.307683   13461 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:46:52.308217   13461 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1002 19:46:52.308392   13461 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 19:46:52.308417   13461 cni.go:84] Creating CNI manager for ""
	I1002 19:46:52.308466   13461 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 19:46:52.308476   13461 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 19:46:52.308522   13461 start.go:350] cluster config:
	{Name:download-only-364492 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-364492 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:46:52.308690   13461 iso.go:125] acquiring lock: {Name:mkabc2fb4ac96edf87725f05149cf44e9a15d593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:46:52.310261   13461 out.go:99] Downloading VM boot image ...
	I1002 19:46:52.310289   13461 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21683-9524/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1002 19:47:03.992160   13461 out.go:99] Starting "download-only-364492" primary control-plane node in "download-only-364492" cluster
	I1002 19:47:03.992218   13461 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 19:47:04.099505   13461 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 19:47:04.099542   13461 cache.go:59] Caching tarball of preloaded images
	I1002 19:47:04.099743   13461 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 19:47:04.101410   13461 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 19:47:04.101433   13461 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 19:47:04.426275   13461 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1002 19:47:04.426401   13461 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-364492 host does not exist
	  To start a cluster, run: "minikube start -p download-only-364492"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-364492
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (13.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-586534 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-586534 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (13.787888941s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.79s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 19:47:32.706835   13449 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 19:47:32.706881   13449 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-586534
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-586534: exit status 85 (62.034176ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-364492 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-364492 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ 02 Oct 25 19:47 UTC │
	│ delete  │ -p download-only-364492                                                                                                                                                                             │ download-only-364492 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │ 02 Oct 25 19:47 UTC │
	│ start   │ -o=json --download-only -p download-only-586534 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-586534 │ jenkins │ v1.37.0 │ 02 Oct 25 19:47 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:47:18
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:47:18.964869   13730 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:47:18.965089   13730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:18.965097   13730 out.go:374] Setting ErrFile to fd 2...
	I1002 19:47:18.965101   13730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:47:18.965274   13730 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 19:47:18.965751   13730 out.go:368] Setting JSON to true
	I1002 19:47:18.966525   13730 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1782,"bootTime":1759432657,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:47:18.966623   13730 start.go:140] virtualization: kvm guest
	I1002 19:47:18.968419   13730 out.go:99] [download-only-586534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 19:47:18.968600   13730 notify.go:221] Checking for updates...
	I1002 19:47:18.969646   13730 out.go:171] MINIKUBE_LOCATION=21683
	I1002 19:47:18.970997   13730 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:47:18.972281   13730 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 19:47:18.973456   13730 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 19:47:18.974556   13730 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 19:47:18.976689   13730 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 19:47:18.976983   13730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:47:19.013351   13730 out.go:99] Using the kvm2 driver based on user configuration
	I1002 19:47:19.013402   13730 start.go:306] selected driver: kvm2
	I1002 19:47:19.013411   13730 start.go:936] validating driver "kvm2" against <nil>
	I1002 19:47:19.013786   13730 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:47:19.013876   13730 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:47:19.028772   13730 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 19:47:19.028805   13730 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-9524/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:47:19.043424   13730 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1002 19:47:19.043474   13730 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:47:19.044053   13730 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1002 19:47:19.044202   13730 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 19:47:19.044227   13730 cni.go:84] Creating CNI manager for ""
	I1002 19:47:19.044289   13730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1002 19:47:19.044304   13730 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 19:47:19.044353   13730 start.go:350] cluster config:
	{Name:download-only-586534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-586534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:47:19.044444   13730 iso.go:125] acquiring lock: {Name:mkabc2fb4ac96edf87725f05149cf44e9a15d593 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:47:19.045892   13730 out.go:99] Starting "download-only-586534" primary control-plane node in "download-only-586534" cluster
	I1002 19:47:19.045912   13730 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 19:47:19.538872   13730 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 19:47:19.538906   13730 cache.go:59] Caching tarball of preloaded images
	I1002 19:47:19.539077   13730 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 19:47:19.540787   13730 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 19:47:19.540810   13730 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 19:47:19.651862   13730 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1002 19:47:19.651923   13730 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21683-9524/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-586534 host does not exist
	  To start a cluster, run: "minikube start -p download-only-586534"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-586534
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1002 19:47:33.309004   13449 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-418628 --alsologtostderr --binary-mirror http://127.0.0.1:33167 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-418628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-418628
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestOffline (88.59s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-529925 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-529925 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.665429186s)
helpers_test.go:175: Cleaning up "offline-crio-529925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-529925
--- PASS: TestOffline (88.59s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-355008
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-355008: exit status 85 (52.378492ms)

                                                
                                                
-- stdout --
	* Profile "addons-355008" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-355008"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-355008
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-355008: exit status 85 (51.626506ms)

                                                
                                                
-- stdout --
	* Profile "addons-355008" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-355008"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (205.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-355008 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-355008 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m25.609279367s)
--- PASS: TestAddons/Setup (205.61s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-355008 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-355008 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-355008 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-355008 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [db3e6724-fe44-444c-92cc-4c9f950e8e37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [db3e6724-fe44-444c-92cc-4c9f950e8e37] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004214261s
addons_test.go:694: (dbg) Run:  kubectl --context addons-355008 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-355008 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-355008 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                    
TestAddons/parallel/Registry (18.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 17.470008ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-qxggg" [57164eed-554b-4faf-b980-8d2bec0591e2] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008034102s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-ms5bg" [e3c4f852-8a22-497f-9878-10e84254ec7b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00509846s
addons_test.go:392: (dbg) Run:  kubectl --context addons-355008 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-355008 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-355008 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.379622854s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 ip
2025/10/02 19:51:36 [DEBUG] GET http://192.168.39.211:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.23s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.75s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.341221ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-355008
addons_test.go:332: (dbg) Run:  kubectl --context addons-355008 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.75s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-l8bhx" [fb92b572-893f-40b4-a330-fd17d21d0ff0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005376242s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.55s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.88s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 17.559247ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-tgmcd" [4ed74aa4-2561-4444-bce0-b2d4ab76154f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006872054s
addons_test.go:463: (dbg) Run:  kubectl --context addons-355008 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.88s)

                                                
                                    
TestAddons/parallel/CSI (71.18s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1002 19:51:50.487461   13449 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 19:51:50.493149   13449 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 19:51:50.493172   13449 kapi.go:107] duration metric: took 5.717787ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.727215ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-355008 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-355008 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [112c99ef-47ae-4b38-9504-2014c62c4494] Pending
helpers_test.go:352: "task-pv-pod" [112c99ef-47ae-4b38-9504-2014c62c4494] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [112c99ef-47ae-4b38-9504-2014c62c4494] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.005067009s
addons_test.go:572: (dbg) Run:  kubectl --context addons-355008 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-355008 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-355008 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-355008 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-355008 delete pod task-pv-pod: (1.100378903s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-355008 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-355008 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-355008 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4b1a5eee-572e-4e7d-b0d5-553d9eea2de0] Pending
helpers_test.go:352: "task-pv-pod-restore" [4b1a5eee-572e-4e7d-b0d5-553d9eea2de0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4b1a5eee-572e-4e7d-b0d5-553d9eea2de0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004756635s
addons_test.go:614: (dbg) Run:  kubectl --context addons-355008 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-355008 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-355008 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-355008 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.05509452s)
--- PASS: TestAddons/parallel/CSI (71.18s)

                                                
                                    
TestAddons/parallel/Headlamp (25.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-355008 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-355008 --alsologtostderr -v=1: (1.151974195s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-n7rn2" [d3e27d39-209d-4533-a7af-3fe6406db58c] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-n7rn2" [d3e27d39-209d-4533-a7af-3fe6406db58c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-n7rn2" [d3e27d39-209d-4533-a7af-3fe6406db58c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.006367311s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-355008 addons disable headlamp --alsologtostderr -v=1: (6.17508554s)
--- PASS: TestAddons/parallel/Headlamp (25.33s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-2t6kr" [fa072236-21c9-47df-ae4c-451aa5288bdd] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004803134s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                    
TestAddons/parallel/LocalPath (56.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-355008 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-355008 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c7c3d4d8-4573-4235-8ffd-2e1326704f83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c7c3d4d8-4573-4235-8ffd-2e1326704f83] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c7c3d4d8-4573-4235-8ffd-2e1326704f83] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004685077s
addons_test.go:967: (dbg) Run:  kubectl --context addons-355008 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 ssh "cat /opt/local-path-provisioner/pvc-a708a7f0-6298-4a5f-9828-b66cc225a095_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-355008 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-355008 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-355008 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.276588419s)
--- PASS: TestAddons/parallel/LocalPath (56.19s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-74jk8" [1ee77706-ccb3-4b2a-a745-fb66b3b18f87] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004446497s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

TestAddons/parallel/Yakd (11.98s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-stsls" [9d30c719-6952-4337-9282-35f33db513cf] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004635758s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-355008 addons disable yakd --alsologtostderr -v=1: (5.971772059s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

TestAddons/StoppedEnableDisable (80.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-355008
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-355008: (1m20.477181377s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-355008
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-355008
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-355008
--- PASS: TestAddons/StoppedEnableDisable (80.75s)

TestCertOptions (42.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-998413 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-998413 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.606663507s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-998413 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-998413 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-998413 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-998413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-998413
--- PASS: TestCertOptions (42.96s)

TestCertExpiration (309.72s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-491886 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-491886 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.895558673s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-491886 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-491886 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.500785436s)
helpers_test.go:175: Cleaning up "cert-expiration-491886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-491886
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-491886: (1.32739044s)
--- PASS: TestCertExpiration (309.72s)

TestForceSystemdFlag (71.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-547667 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-547667 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.0018882s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-547667 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-547667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-547667
--- PASS: TestForceSystemdFlag (71.18s)

TestForceSystemdEnv (47.66s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-815066 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-815066 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.803710692s)
helpers_test.go:175: Cleaning up "force-systemd-env-815066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-815066
--- PASS: TestForceSystemdEnv (47.66s)

TestKVMDriverInstallOrUpdate (1.49s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1002 20:45:52.312346   13449 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1002 20:45:52.312522   13449 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3609489695/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 20:45:52.353471   13449 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3609489695/001/docker-machine-driver-kvm2 version is 1.1.1
W1002 20:45:52.353524   13449 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1002 20:45:52.353692   13449 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1002 20:45:52.353762   13449 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3609489695/001/docker-machine-driver-kvm2
I1002 20:45:53.654168   13449 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3609489695/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1002 20:45:53.675260   13449 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3609489695/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.49s)
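The driver check above compares the installed binary's reported version (1.1.1) against the wanted release (1.37.0) and downloads the newer build when it is behind. A toy version comparison in that spirit follows; it is an illustrative sketch, not minikube's install.go.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// olderThan reports whether dotted version a is strictly older than b,
// comparing numeric fields left to right (missing fields count as zero).
func olderThan(a, b string) bool {
	af, bf := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(af) || i < len(bf); i++ {
		var ai, bi int
		if i < len(af) {
			ai, _ = strconv.Atoi(af[i])
		}
		if i < len(bf) {
			bi, _ = strconv.Atoi(bf[i])
		}
		if ai != bi {
			return ai < bi
		}
	}
	return false
}

func main() {
	installed, want := "1.1.1", "1.37.0" // versions reported in the log above
	if olderThan(installed, want) {
		fmt.Printf("driver is %s, want %s: fetch the newer release\n", installed, want)
	}
}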

TestErrorSpam/setup (40.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-193804 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-193804 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-193804 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-193804 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.574984503s)
--- PASS: TestErrorSpam/setup (40.58s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 status
E1002 19:56:00.270336   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:56:00.276698   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 status
E1002 19:56:00.289015   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:56:00.310801   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:56:00.352232   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:56:00.433649   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 status
E1002 19:56:00.595290   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 pause
E1002 19:56:00.916878   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 pause
E1002 19:56:01.558628   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.98s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 unpause
E1002 19:56:02.840430   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

TestErrorSpam/stop (4.93s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 stop
E1002 19:56:05.402382   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 stop: (2.164713025s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 stop: (1.249596526s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-193804 --log_dir /tmp/nospam-193804 stop: (1.515167784s)
--- PASS: TestErrorSpam/stop (4.93s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-9524/.minikube/files/etc/test/nested/copy/13449/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-527118 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 19:56:10.524369   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:56:20.766511   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 19:56:41.247918   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-527118 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.061815863s)
--- PASS: TestFunctional/serial/StartWithProxy (54.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.2s)

=== RUN   TestFunctional/serial/SoftStart
I1002 19:57:03.915375   13449 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-527118 --alsologtostderr -v=8
E1002 19:57:22.209416   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-527118 --alsologtostderr -v=8: (37.203394991s)
functional_test.go:678: soft start took 37.204106449s for "functional-527118" cluster.
I1002 19:57:41.119075   13449 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.20s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-527118 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 cache add registry.k8s.io/pause:3.1: (1.173845258s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 cache add registry.k8s.io/pause:3.3: (1.140649761s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 cache add registry.k8s.io/pause:latest: (1.136280967s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (2.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-527118 /tmp/TestFunctionalserialCacheCmdcacheadd_local2863953713/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cache add minikube-local-cache-test:functional-527118
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 cache add minikube-local-cache-test:functional-527118: (2.004987605s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cache delete minikube-local-cache-test:functional-527118
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-527118
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.617033ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 cache reload: (1.006539746s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)
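The cache_reload sequence above deletes an image inside the node, confirms that "crictl inspecti" now fails, runs "cache reload", and confirms the image is back. A standalone sketch of that flow is shown below, assuming a minikube binary on PATH and reusing the profile and image names from the log.

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent mirrors: minikube -p <profile> ssh sudo crictl inspecti <image>
// A nil error means crictl found the image inside the node.
func imagePresent(profile, image string) bool {
	return exec.Command("minikube", "-p", profile, "ssh",
		"sudo crictl inspecti "+image).Run() == nil
}

func main() {
	const profile = "functional-527118"
	const image = "registry.k8s.io/pause:latest"
	if !imagePresent(profile, image) {
		fmt.Println("image missing inside the node; reloading cache")
		if err := exec.Command("minikube", "-p", profile, "cache", "reload").Run(); err != nil {
			fmt.Println("cache reload failed:", err)
			return
		}
	}
	fmt.Println("image present after reload:", imagePresent(profile, image))
}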

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 kubectl -- --context functional-527118 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-527118 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (33.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-527118 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-527118 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.531951904s)
functional_test.go:776: restart took 33.532075873s for "functional-527118" cluster.
I1002 19:58:22.906491   13449 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.53s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-527118 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
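ComponentHealth lists the control-plane pods as JSON and reports each one's phase and Ready status, as echoed above. A minimal sketch of such a check follows; the kubectl command mirrors the log, and the structs cover only the fields the check reads.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList covers only the fields the health check reads.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// same query as the test: control-plane pods in kube-system, as JSON
	out, err := exec.Command("kubectl", "--context", "functional-527118",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("could not parse pod list:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}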

TestFunctional/serial/LogsCmd (1.56s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 logs: (1.559489775s)
--- PASS: TestFunctional/serial/LogsCmd (1.56s)

TestFunctional/serial/LogsFileCmd (1.54s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 logs --file /tmp/TestFunctionalserialLogsFileCmd2191428681/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 logs --file /tmp/TestFunctionalserialLogsFileCmd2191428681/001/logs.txt: (1.538585438s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

TestFunctional/serial/InvalidService (4.78s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-527118 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-527118
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-527118: exit status 115 (275.1063ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.4:31407 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-527118 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-527118 delete -f testdata/invalidsvc.yaml: (1.251208163s)
--- PASS: TestFunctional/serial/InvalidService (4.78s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 config get cpus: exit status 14 (54.429093ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 config get cpus: exit status 14 (48.414358ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (18.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-527118 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-527118 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 21894: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.06s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-527118 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-527118 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (141.682313ms)

-- stdout --
	* [functional-527118] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 19:58:56.627927   21764 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:58:56.628187   21764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:58:56.628197   21764 out.go:374] Setting ErrFile to fd 2...
	I1002 19:58:56.628201   21764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:58:56.628375   21764 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 19:58:56.628819   21764 out.go:368] Setting JSON to false
	I1002 19:58:56.629699   21764 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2480,"bootTime":1759432657,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:58:56.629865   21764 start.go:140] virtualization: kvm guest
	I1002 19:58:56.631918   21764 out.go:179] * [functional-527118] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 19:58:56.633375   21764 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 19:58:56.633401   21764 notify.go:221] Checking for updates...
	I1002 19:58:56.636482   21764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:58:56.637746   21764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 19:58:56.639393   21764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 19:58:56.644004   21764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:58:56.645381   21764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:58:56.647058   21764 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 19:58:56.647800   21764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:58:56.647897   21764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:58:56.665876   21764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45167
	I1002 19:58:56.666380   21764 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:58:56.666886   21764 main.go:141] libmachine: Using API Version  1
	I1002 19:58:56.666904   21764 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:58:56.667201   21764 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:58:56.667392   21764 main.go:141] libmachine: (functional-527118) Calling .DriverName
	I1002 19:58:56.667783   21764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:58:56.668294   21764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:58:56.668345   21764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:58:56.681109   21764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1002 19:58:56.681485   21764 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:58:56.681953   21764 main.go:141] libmachine: Using API Version  1
	I1002 19:58:56.681974   21764 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:58:56.682278   21764 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:58:56.682460   21764 main.go:141] libmachine: (functional-527118) Calling .DriverName
	I1002 19:58:56.715139   21764 out.go:179] * Using the kvm2 driver based on existing profile
	I1002 19:58:56.716341   21764 start.go:306] selected driver: kvm2
	I1002 19:58:56.716358   21764 start.go:936] validating driver "kvm2" against &{Name:functional-527118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-527118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
tString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:58:56.716505   21764 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:58:56.718513   21764 out.go:203] 
	W1002 19:58:56.719613   21764 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 19:58:56.720708   21764 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-527118 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.28s)
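The dry-run above fails fast because the requested 250MB is below the 1800MB usable minimum quoted in the error. A toy illustration of that kind of pre-flight validation follows; the 1800MB floor is taken from the message above, and everything else is an assumption rather than minikube's actual validator.

package main

import "fmt"

// minUsableMB comes from the RSRC_INSUFFICIENT_REQ_MEMORY message above; the
// rest of this file is an illustrative stand-in, not minikube's validator.
const minUsableMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
	fmt.Println("3072MB passes the check:", validateMemory(3072) == nil)
}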

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-527118 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-527118 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (132.936026ms)

-- stdout --
	* [functional-527118] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 19:58:56.496356   21725 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:58:56.496455   21725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:58:56.496466   21725 out.go:374] Setting ErrFile to fd 2...
	I1002 19:58:56.496473   21725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:58:56.496822   21725 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 19:58:56.497297   21725 out.go:368] Setting JSON to false
	I1002 19:58:56.498235   21725 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2479,"bootTime":1759432657,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:58:56.498332   21725 start.go:140] virtualization: kvm guest
	I1002 19:58:56.500072   21725 out.go:179] * [functional-527118] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1002 19:58:56.501240   21725 notify.go:221] Checking for updates...
	I1002 19:58:56.501297   21725 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 19:58:56.502382   21725 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:58:56.503596   21725 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 19:58:56.504948   21725 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 19:58:56.506046   21725 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:58:56.507116   21725 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:58:56.508583   21725 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 19:58:56.509353   21725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:58:56.509413   21725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:58:56.523946   21725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I1002 19:58:56.524398   21725 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:58:56.524941   21725 main.go:141] libmachine: Using API Version  1
	I1002 19:58:56.524970   21725 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:58:56.525401   21725 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:58:56.525571   21725 main.go:141] libmachine: (functional-527118) Calling .DriverName
	I1002 19:58:56.525846   21725 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:58:56.526128   21725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 19:58:56.526164   21725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:58:56.539386   21725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34699
	I1002 19:58:56.539909   21725 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:58:56.540432   21725 main.go:141] libmachine: Using API Version  1
	I1002 19:58:56.540460   21725 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:58:56.540824   21725 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:58:56.540978   21725 main.go:141] libmachine: (functional-527118) Calling .DriverName
	I1002 19:58:56.572358   21725 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1002 19:58:56.573576   21725 start.go:306] selected driver: kvm2
	I1002 19:58:56.573593   21725 start.go:936] validating driver "kvm2" against &{Name:functional-527118 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-527118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
tString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:58:56.573742   21725 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:58:56.575875   21725 out.go:203] 
	W1002 19:58:56.576963   21725 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 19:58:56.578100   21725 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.87s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

TestFunctional/parallel/ServiceCmdConnect (21.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-527118 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-527118 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-64txb" [94293803-39c6-487e-b090-430b02edbafd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-64txb" [94293803-39c6-487e-b090-430b02edbafd] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.005062009s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.4:31905
functional_test.go:1680: http://192.168.39.4:31905: success! body:
Request served by hello-node-connect-7d85dfc575-64txb

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.4:31905
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.57s)
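
The body above is the request echoed back by kicbase/echo-server, which is what makes it a convenient connectivity probe: fetch the NodePort URL printed by the service command and check that the serving pod identifies itself. A small sketch under that assumption; the URL is the one from this run and will differ on any other cluster.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// URL as printed by "service hello-node-connect --url" in this run; treat
	// it as a placeholder.
	resp, err := http.Get("http://192.168.39.4:31905")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(body), "Request served by") {
		fmt.Println("echo-server answered:", strings.SplitN(string(body), "\n", 2)[0])
	}
}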

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
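
A sketch of consuming the addons list -o json output programmatically, assuming the top level is a JSON object keyed by addon name (which is how recent minikube releases print it); the per-addon value schema is deliberately left opaque here.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-527118",
		"addons", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Assumption: a JSON object keyed by addon name; values left raw.
	var addons map[string]json.RawMessage
	if err := json.Unmarshal(out, &addons); err != nil {
		panic(err)
	}
	for name := range addons {
		fmt.Println(name)
	}
}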

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (47.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6c63cc97-6eb1-4a45-9f34-27cfd39e9980] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006497767s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-527118 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-527118 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-527118 get pvc myclaim -o=json
I1002 19:58:37.835255   13449 retry.go:31] will retry after 1.643527802s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8e7ca628-c3cc-413b-90f9-5d65867dd02e ResourceVersion:689 Generation:0 CreationTimestamp:2025-10-02 19:58:37 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a8e7f0 VolumeMode:0xc001a8e800 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-527118 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-527118 apply -f testdata/storage-provisioner/pod.yaml
I1002 19:58:39.860033   13449 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [213e0e07-a4bf-4f33-b0b5-5fe8bc2ef905] Pending
helpers_test.go:352: "sp-pod" [213e0e07-a4bf-4f33-b0b5-5fe8bc2ef905] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1002 19:58:44.130973   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [213e0e07-a4bf-4f33-b0b5-5fe8bc2ef905] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004857224s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-527118 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-527118 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-527118 apply -f testdata/storage-provisioner/pod.yaml
I1002 19:59:03.938565   13449 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ea5ef6ca-6c36-4969-a2ab-bbd18f0ebd02] Pending
helpers_test.go:352: "sp-pod" [ea5ef6ca-6c36-4969-a2ab-bbd18f0ebd02] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ea5ef6ca-6c36-4969-a2ab-bbd18f0ebd02] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.005124622s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-527118 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.56s)
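
The retry.go line above shows the pattern the test relies on: the claim can sit in Pending briefly while the minikube-hostpath provisioner creates the volume, so its phase is polled until it reports Bound. A standalone sketch of the same wait loop using kubectl and jsonpath; the timeout and poll interval are arbitrary choices.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForBound polls the claim the way the test above does: fetch the PVC,
// check its phase, and retry with a short delay until it reports Bound.
func waitForBound(ctx, claim string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", claim,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pvc %s not Bound within %s", claim, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForBound("functional-527118", "myclaim", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("myclaim is Bound")
}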

                                                
                                    
TestFunctional/parallel/SSHCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh -n functional-527118 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cp functional-527118:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3870294176/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh -n functional-527118 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh -n functional-527118 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)
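
The cp/ssh pairing above is the whole verification: copy a file into the VM, read it back over ssh, and compare with the original. A compact sketch of that round-trip, with the profile and paths taken from this run.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "functional-527118"
	src, dst := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	// Copy into the VM, then read it back over ssh and compare byte-for-byte.
	if err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", src, dst).Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"sudo cat "+dst).Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(got, want) {
		panic("round-tripped file differs from the original")
	}
	fmt.Println("cp round-trip verified")
}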

                                                
                                    
TestFunctional/parallel/MySQL (24.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-527118 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-8dxkj" [b70cb1bf-ed71-4a3a-990e-8c5156677b73] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-8dxkj" [b70cb1bf-ed71-4a3a-990e-8c5156677b73] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.01173428s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-527118 exec mysql-5bb876957f-8dxkj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-527118 exec mysql-5bb876957f-8dxkj -- mysql -ppassword -e "show databases;": exit status 1 (531.986448ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1002 19:58:51.917096   13449 retry.go:31] will retry after 1.236355264s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-527118 exec mysql-5bb876957f-8dxkj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-527118 exec mysql-5bb876957f-8dxkj -- mysql -ppassword -e "show databases;": exit status 1 (167.195214ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1002 19:58:53.321905   13449 retry.go:31] will retry after 2.083759225s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-527118 exec mysql-5bb876957f-8dxkj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.42s)
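
The two failed attempts above are expected noise: the pod reports Running before mysqld finishes initializing, so the query is retried with a growing delay until it succeeds, which is what retry.go does. A standalone sketch of the same backoff loop; the attempt count and delays are arbitrary, and the pod name is the one from this run.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name is taken from this run; it changes on every deployment.
	args := []string{"--context", "functional-527118", "exec", "mysql-5bb876957f-8dxkj",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	panic("mysql never became ready")
}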

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/13449/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo cat /etc/test/nested/copy/13449/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/13449.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo cat /etc/ssl/certs/13449.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/13449.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo cat /usr/share/ca-certificates/13449.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/134492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo cat /etc/ssl/certs/134492.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/134492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo cat /usr/share/ca-certificates/134492.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
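
Each synced certificate is checked under both its original name and a hashed name; 51391683.0 appears to be the OpenSSL-style subject-hash link for the same file. A small sketch that simply confirms the expected paths exist inside the VM over ssh.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/13449.pem",
		"/usr/share/ca-certificates/13449.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		// A zero exit from "test -f" inside the VM means the file is present.
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-527118",
			"ssh", "sudo test -f "+p).Run()
		fmt.Printf("%s present: %v\n", p, err == nil)
	}
}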

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-527118 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh "sudo systemctl is-active docker": exit status 1 (283.341292ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh "sudo systemctl is-active containerd": exit status 1 (234.680418ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
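
The non-zero exits above are expected: systemctl is-active exits non-zero whenever the unit is not active, so with crio as the configured runtime the docker and containerd probes fail by design while still printing inactive. A sketch that reads the printed state instead of relying on the exit code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState ignores the exit code on purpose: "inactive" on stdout is
// exactly the answer the check above is looking for.
func runtimeState(unit string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-527118",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: %s\n", unit, runtimeState(unit))
	}
}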

                                                
                                    
TestFunctional/parallel/License (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-527118 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-527118
localhost/kicbase/echo-server:functional-527118
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-527118 image ls --format short --alsologtostderr:
I1002 19:59:08.796811   22718 out.go:360] Setting OutFile to fd 1 ...
I1002 19:59:08.796975   22718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:08.796986   22718 out.go:374] Setting ErrFile to fd 2...
I1002 19:59:08.796993   22718 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:08.797367   22718 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
I1002 19:59:08.798294   22718 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:08.798447   22718 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:08.798999   22718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:08.799095   22718 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:08.815374   22718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
I1002 19:59:08.815913   22718 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:08.816417   22718 main.go:141] libmachine: Using API Version  1
I1002 19:59:08.816439   22718 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:08.816881   22718 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:08.817087   22718 main.go:141] libmachine: (functional-527118) Calling .GetState
I1002 19:59:08.819472   22718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:08.819527   22718 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:08.834246   22718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
I1002 19:59:08.834712   22718 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:08.835239   22718 main.go:141] libmachine: Using API Version  1
I1002 19:59:08.835263   22718 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:08.835660   22718 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:08.835931   22718 main.go:141] libmachine: (functional-527118) Calling .DriverName
I1002 19:59:08.836142   22718 ssh_runner.go:195] Run: systemctl --version
I1002 19:59:08.836171   22718 main.go:141] libmachine: (functional-527118) Calling .GetSSHHostname
I1002 19:59:08.840543   22718 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:08.840973   22718 main.go:141] libmachine: (functional-527118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:85:47", ip: ""} in network mk-functional-527118: {Iface:virbr1 ExpiryTime:2025-10-02 20:56:25 +0000 UTC Type:0 Mac:52:54:00:9e:85:47 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-527118 Clientid:01:52:54:00:9e:85:47}
I1002 19:59:08.841011   22718 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined IP address 192.168.39.4 and MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:08.841218   22718 main.go:141] libmachine: (functional-527118) Calling .GetSSHPort
I1002 19:59:08.841404   22718 main.go:141] libmachine: (functional-527118) Calling .GetSSHKeyPath
I1002 19:59:08.841550   22718 main.go:141] libmachine: (functional-527118) Calling .GetSSHUsername
I1002 19:59:08.841701   22718 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/functional-527118/id_rsa Username:docker}
I1002 19:59:08.937593   22718 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 19:59:09.037960   22718 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.037978   22718 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.038289   22718 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.038309   22718 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:09.038325   22718 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.038333   22718 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.038591   22718 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.038608   22718 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:09.038662   22718 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-527118 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 203ad09fc1566 │ 197MB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-527118  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-527118  │ 03af4c6f8b2e6 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-527118 image ls --format table --alsologtostderr:
I1002 19:59:09.461793   22841 out.go:360] Setting OutFile to fd 1 ...
I1002 19:59:09.461955   22841 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:09.461968   22841 out.go:374] Setting ErrFile to fd 2...
I1002 19:59:09.461974   22841 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:09.462272   22841 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
I1002 19:59:09.463181   22841 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:09.463343   22841 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:09.463965   22841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:09.464072   22841 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:09.478269   22841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
I1002 19:59:09.478820   22841 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:09.479489   22841 main.go:141] libmachine: Using API Version  1
I1002 19:59:09.479511   22841 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:09.479891   22841 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:09.480097   22841 main.go:141] libmachine: (functional-527118) Calling .GetState
I1002 19:59:09.482213   22841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:09.482253   22841 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:09.495805   22841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
I1002 19:59:09.496322   22841 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:09.496830   22841 main.go:141] libmachine: Using API Version  1
I1002 19:59:09.496853   22841 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:09.497211   22841 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:09.497382   22841 main.go:141] libmachine: (functional-527118) Calling .DriverName
I1002 19:59:09.497584   22841 ssh_runner.go:195] Run: systemctl --version
I1002 19:59:09.497615   22841 main.go:141] libmachine: (functional-527118) Calling .GetSSHHostname
I1002 19:59:09.500816   22841 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:09.501258   22841 main.go:141] libmachine: (functional-527118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:85:47", ip: ""} in network mk-functional-527118: {Iface:virbr1 ExpiryTime:2025-10-02 20:56:25 +0000 UTC Type:0 Mac:52:54:00:9e:85:47 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-527118 Clientid:01:52:54:00:9e:85:47}
I1002 19:59:09.501289   22841 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined IP address 192.168.39.4 and MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:09.501440   22841 main.go:141] libmachine: (functional-527118) Calling .GetSSHPort
I1002 19:59:09.501600   22841 main.go:141] libmachine: (functional-527118) Calling .GetSSHKeyPath
I1002 19:59:09.501798   22841 main.go:141] libmachine: (functional-527118) Calling .GetSSHUsername
I1002 19:59:09.501951   22841 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/functional-527118/id_rsa Username:docker}
I1002 19:59:09.605707   22841 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 19:59:09.861782   22841 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.861807   22841 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.862081   22841 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.862098   22841 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:09.862126   22841 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.862127   22841 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
I1002 19:59:09.862137   22841 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.862432   22841 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
I1002 19:59:09.862430   22841 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.862474   22841 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-527118 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d","repoDigests":["docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c","docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
"repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDi
gests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553
d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:
127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-527118"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0
f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"03af4c6f8b2e62efcf8958d00ed2d473858fb6f4a25f056a581184a605eb6aba","repoDigests":["localhost/minikube-local-cache-test@sha256:6e4f31d88b8288b179caa35426995fad4759cd824c3d24685bb9869b8890678c"],"repoTags":["localhost/minikube-local-cache-test:functional-527118"],"size":"3328"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a
13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-527118 image ls --format json --alsologtostderr:
I1002 19:59:09.127639   22774 out.go:360] Setting OutFile to fd 1 ...
I1002 19:59:09.127921   22774 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:09.127929   22774 out.go:374] Setting ErrFile to fd 2...
I1002 19:59:09.127935   22774 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:09.128190   22774 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
I1002 19:59:09.128814   22774 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:09.128935   22774 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:09.129305   22774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:09.129374   22774 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:09.142392   22774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
I1002 19:59:09.142851   22774 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:09.143364   22774 main.go:141] libmachine: Using API Version  1
I1002 19:59:09.143383   22774 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:09.143972   22774 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:09.144191   22774 main.go:141] libmachine: (functional-527118) Calling .GetState
I1002 19:59:09.146328   22774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:09.146373   22774 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:09.160610   22774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
I1002 19:59:09.161102   22774 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:09.161551   22774 main.go:141] libmachine: Using API Version  1
I1002 19:59:09.161572   22774 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:09.161936   22774 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:09.162119   22774 main.go:141] libmachine: (functional-527118) Calling .DriverName
I1002 19:59:09.162348   22774 ssh_runner.go:195] Run: systemctl --version
I1002 19:59:09.162378   22774 main.go:141] libmachine: (functional-527118) Calling .GetSSHHostname
I1002 19:59:09.166037   22774 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:09.166619   22774 main.go:141] libmachine: (functional-527118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:85:47", ip: ""} in network mk-functional-527118: {Iface:virbr1 ExpiryTime:2025-10-02 20:56:25 +0000 UTC Type:0 Mac:52:54:00:9e:85:47 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-527118 Clientid:01:52:54:00:9e:85:47}
I1002 19:59:09.166652   22774 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined IP address 192.168.39.4 and MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:09.166928   22774 main.go:141] libmachine: (functional-527118) Calling .GetSSHPort
I1002 19:59:09.167140   22774 main.go:141] libmachine: (functional-527118) Calling .GetSSHKeyPath
I1002 19:59:09.167316   22774 main.go:141] libmachine: (functional-527118) Calling .GetSSHUsername
I1002 19:59:09.167487   22774 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/functional-527118/id_rsa Username:docker}
I1002 19:59:09.249976   22774 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 19:59:09.299869   22774 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.299882   22774 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.300170   22774 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.300190   22774 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:09.300200   22774 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.300208   22774 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.300242   22774 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
I1002 19:59:09.300458   22774 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.300477   22774 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:09.300485   22774 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
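
The JSON stdout above shows the shape each image entry takes (id, repoDigests, repoTags, and size as a string of bytes). A small sketch that unmarshals that output into a matching struct and prints tag and size; the struct mirrors only the fields visible in this run's output.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Image mirrors the fields visible in the JSON stdout above; the real struct
// in minikube may carry more.
type Image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-527118",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []Image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-55s %s\n", img.RepoTags[0], img.Size)
		}
	}
}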

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-527118 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 203ad09fc1566a329c1d2af8d1f219b28fd2c00b69e743bd572b7f662365432d
repoDigests:
- docker.io/library/nginx@sha256:17ae566734b63632e543c907ba74757e0c1a25d812ab9f10a07a6bed98dd199c
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: 03af4c6f8b2e62efcf8958d00ed2d473858fb6f4a25f056a581184a605eb6aba
repoDigests:
- localhost/minikube-local-cache-test@sha256:6e4f31d88b8288b179caa35426995fad4759cd824c3d24685bb9869b8890678c
repoTags:
- localhost/minikube-local-cache-test:functional-527118
size: "3328"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-527118
size: "4943877"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-527118 image ls --format yaml --alsologtostderr:
I1002 19:59:08.798899   22719 out.go:360] Setting OutFile to fd 1 ...
I1002 19:59:08.799195   22719 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:08.799207   22719 out.go:374] Setting ErrFile to fd 2...
I1002 19:59:08.799214   22719 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:08.799540   22719 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
I1002 19:59:08.800163   22719 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:08.800303   22719 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:08.800840   22719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:08.800913   22719 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:08.815347   22719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
I1002 19:59:08.815850   22719 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:08.816473   22719 main.go:141] libmachine: Using API Version  1
I1002 19:59:08.816506   22719 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:08.816896   22719 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:08.817130   22719 main.go:141] libmachine: (functional-527118) Calling .GetState
I1002 19:59:08.819604   22719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:08.819659   22719 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:08.835892   22719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
I1002 19:59:08.836340   22719 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:08.836943   22719 main.go:141] libmachine: Using API Version  1
I1002 19:59:08.836972   22719 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:08.837407   22719 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:08.837633   22719 main.go:141] libmachine: (functional-527118) Calling .DriverName
I1002 19:59:08.837863   22719 ssh_runner.go:195] Run: systemctl --version
I1002 19:59:08.837895   22719 main.go:141] libmachine: (functional-527118) Calling .GetSSHHostname
I1002 19:59:08.842145   22719 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:08.842663   22719 main.go:141] libmachine: (functional-527118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:85:47", ip: ""} in network mk-functional-527118: {Iface:virbr1 ExpiryTime:2025-10-02 20:56:25 +0000 UTC Type:0 Mac:52:54:00:9e:85:47 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-527118 Clientid:01:52:54:00:9e:85:47}
I1002 19:59:08.842697   22719 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined IP address 192.168.39.4 and MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:08.842938   22719 main.go:141] libmachine: (functional-527118) Calling .GetSSHPort
I1002 19:59:08.843142   22719 main.go:141] libmachine: (functional-527118) Calling .GetSSHKeyPath
I1002 19:59:08.843323   22719 main.go:141] libmachine: (functional-527118) Calling .GetSSHUsername
I1002 19:59:08.843474   22719 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/functional-527118/id_rsa Username:docker}
I1002 19:59:08.961186   22719 ssh_runner.go:195] Run: sudo crictl images --output json
I1002 19:59:09.068608   22719 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.068624   22719 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.068944   22719 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.068969   22719 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:09.068978   22719 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
I1002 19:59:09.068992   22719 main.go:141] libmachine: Making call to close driver server
I1002 19:59:09.069006   22719 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:09.069207   22719 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:09.069221   22719 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:09.069320   22719 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh pgrep buildkitd: exit status 1 (217.552382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image build -t localhost/my-image:functional-527118 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 image build -t localhost/my-image:functional-527118 testdata/build --alsologtostderr: (4.258370911s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-527118 image build -t localhost/my-image:functional-527118 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e83b1c1c5fa
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-527118
--> 1ff1970a2d1
Successfully tagged localhost/my-image:functional-527118
1ff1970a2d174a27b55ebb8b1d2e24b42cf2d914f7b4fa915d4a1dcd8b2dae7c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-527118 image build -t localhost/my-image:functional-527118 testdata/build --alsologtostderr:
I1002 19:59:09.308752   22817 out.go:360] Setting OutFile to fd 1 ...
I1002 19:59:09.309081   22817 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:09.309092   22817 out.go:374] Setting ErrFile to fd 2...
I1002 19:59:09.309097   22817 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 19:59:09.309307   22817 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
I1002 19:59:09.309927   22817 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:09.310644   22817 config.go:182] Loaded profile config "functional-527118": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 19:59:09.311084   22817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:09.311125   22817 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:09.324960   22817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
I1002 19:59:09.325468   22817 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:09.326015   22817 main.go:141] libmachine: Using API Version  1
I1002 19:59:09.326042   22817 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:09.326474   22817 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:09.326688   22817 main.go:141] libmachine: (functional-527118) Calling .GetState
I1002 19:59:09.328860   22817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1002 19:59:09.328904   22817 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:59:09.342402   22817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
I1002 19:59:09.342816   22817 main.go:141] libmachine: () Calling .GetVersion
I1002 19:59:09.343208   22817 main.go:141] libmachine: Using API Version  1
I1002 19:59:09.343227   22817 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:59:09.343565   22817 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:59:09.343782   22817 main.go:141] libmachine: (functional-527118) Calling .DriverName
I1002 19:59:09.343964   22817 ssh_runner.go:195] Run: systemctl --version
I1002 19:59:09.343987   22817 main.go:141] libmachine: (functional-527118) Calling .GetSSHHostname
I1002 19:59:09.346846   22817 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:09.347266   22817 main.go:141] libmachine: (functional-527118) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:85:47", ip: ""} in network mk-functional-527118: {Iface:virbr1 ExpiryTime:2025-10-02 20:56:25 +0000 UTC Type:0 Mac:52:54:00:9e:85:47 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-527118 Clientid:01:52:54:00:9e:85:47}
I1002 19:59:09.347290   22817 main.go:141] libmachine: (functional-527118) DBG | domain functional-527118 has defined IP address 192.168.39.4 and MAC address 52:54:00:9e:85:47 in network mk-functional-527118
I1002 19:59:09.347487   22817 main.go:141] libmachine: (functional-527118) Calling .GetSSHPort
I1002 19:59:09.347701   22817 main.go:141] libmachine: (functional-527118) Calling .GetSSHKeyPath
I1002 19:59:09.347838   22817 main.go:141] libmachine: (functional-527118) Calling .GetSSHUsername
I1002 19:59:09.347986   22817 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/functional-527118/id_rsa Username:docker}
I1002 19:59:09.431312   22817 build_images.go:161] Building image from path: /tmp/build.3766724884.tar
I1002 19:59:09.431380   22817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 19:59:09.464812   22817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3766724884.tar
I1002 19:59:09.474131   22817 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3766724884.tar: stat -c "%s %y" /var/lib/minikube/build/build.3766724884.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3766724884.tar': No such file or directory
I1002 19:59:09.474170   22817 ssh_runner.go:362] scp /tmp/build.3766724884.tar --> /var/lib/minikube/build/build.3766724884.tar (3072 bytes)
I1002 19:59:09.550116   22817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3766724884
I1002 19:59:09.576384   22817 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3766724884 -xf /var/lib/minikube/build/build.3766724884.tar
I1002 19:59:09.596219   22817 crio.go:315] Building image: /var/lib/minikube/build/build.3766724884
I1002 19:59:09.596296   22817 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-527118 /var/lib/minikube/build/build.3766724884 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1002 19:59:13.462928   22817 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-527118 /var/lib/minikube/build/build.3766724884 --cgroup-manager=cgroupfs: (3.866605644s)
I1002 19:59:13.462999   22817 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3766724884
I1002 19:59:13.488422   22817 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3766724884.tar
I1002 19:59:13.514507   22817 build_images.go:217] Built localhost/my-image:functional-527118 from /tmp/build.3766724884.tar
I1002 19:59:13.514555   22817 build_images.go:133] succeeded building to: functional-527118
I1002 19:59:13.514562   22817 build_images.go:134] failed building to: 
I1002 19:59:13.514593   22817 main.go:141] libmachine: Making call to close driver server
I1002 19:59:13.514605   22817 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:13.514916   22817 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:13.514934   22817 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:59:13.514943   22817 main.go:141] libmachine: Making call to close driver server
I1002 19:59:13.514950   22817 main.go:141] libmachine: (functional-527118) Calling .Close
I1002 19:59:13.514961   22817 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
I1002 19:59:13.515155   22817 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:59:13.515172   22817 main.go:141] libmachine: (functional-527118) DBG | Closing plugin on server side
I1002 19:59:13.515175   22817 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls
2025/10/02 19:59:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.70s)
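
The build above runs three steps from the testdata/build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) inside the cluster's cri-o runtime via podman. Below is a minimal sketch of reproducing an equivalent build by hand; the /tmp directory name and the contents of content.txt are assumptions for illustration, since the actual testdata/build files are not part of this report, while the minikube commands are the ones logged above.

    # Hypothetical build context; only the step order is taken from the log above.
    mkdir -p /tmp/build-sketch
    echo "test content" > /tmp/build-sketch/content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
    # Build inside the cluster's container runtime, then confirm the new tag is listed.
    out/minikube-linux-amd64 -p functional-527118 image build -t localhost/my-image:functional-527118 /tmp/build-sketch --alsologtostderr
    out/minikube-linux-amd64 -p functional-527118 image ls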

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.954352934s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-527118
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
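
The three UpdateContextCmd cases above only run "minikube update-context", which rewrites the kubeconfig entry for the profile to match the cluster's current address. A short sketch of checking the result by hand follows; the minikube command is the one the test runs, the kubectl check is an assumed verification step, not something the test itself performs.

    out/minikube-linux-amd64 -p functional-527118 update-context
    # Assumed check: the server for the functional-527118 context should point at the VM address
    # seen elsewhere in this log (192.168.39.4).
    kubectl config view --minify --context functional-527118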

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image load --daemon kicbase/echo-server:functional-527118 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 image load --daemon kicbase/echo-server:functional-527118 --alsologtostderr: (1.130775247s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image load --daemon kicbase/echo-server:functional-527118 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-527118
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image load --daemon kicbase/echo-server:functional-527118 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image save kicbase/echo-server:functional-527118 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 image save kicbase/echo-server:functional-527118 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.568387496s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image rm kicbase/echo-server:functional-527118 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-527118
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 image save --daemon kicbase/echo-server:functional-527118 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-527118
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
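
Taken together, the ImageCommands tests above drive one image through a full round-trip between the host docker daemon, the cluster's runtime, and a tar archive. Collected into one place for readability, using the commands as logged above (only the comments are added; the tar path is the one used on this CI host):

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-527118
    # host daemon -> cluster runtime
    out/minikube-linux-amd64 -p functional-527118 image load --daemon kicbase/echo-server:functional-527118
    # cluster runtime -> tar archive on the host
    out/minikube-linux-amd64 -p functional-527118 image save kicbase/echo-server:functional-527118 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    # remove from the cluster, then restore from the archive
    out/minikube-linux-amd64 -p functional-527118 image rm kicbase/echo-server:functional-527118
    out/minikube-linux-amd64 -p functional-527118 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    # cluster runtime -> host daemon (shows up as localhost/kicbase/echo-server:functional-527118)
    out/minikube-linux-amd64 -p functional-527118 image save --daemon kicbase/echo-server:functional-527118
    out/minikube-linux-amd64 -p functional-527118 image ls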

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (16.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-527118 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-527118 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wpqtz" [f4c7c08c-3b54-42f2-835b-053b34a72a4c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-wpqtz" [f4c7c08c-3b54-42f2-835b-053b34a72a4c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.007819032s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "405.713414ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "48.191651ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "303.10165ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.541181ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdany-port2899980439/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759435135500845605" to /tmp/TestFunctionalparallelMountCmdany-port2899980439/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759435135500845605" to /tmp/TestFunctionalparallelMountCmdany-port2899980439/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759435135500845605" to /tmp/TestFunctionalparallelMountCmdany-port2899980439/001/test-1759435135500845605
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.374836ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 19:58:55.742498   13449 retry.go:31] will retry after 431.784585ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 19:58 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 19:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 19:58 test-1759435135500845605
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh cat /mount-9p/test-1759435135500845605
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-527118 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9bedecdb-6817-4020-a684-645c0c74c968] Pending
helpers_test.go:352: "busybox-mount" [9bedecdb-6817-4020-a684-645c0c74c968] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9bedecdb-6817-4020-a684-645c0c74c968] Running
helpers_test.go:352: "busybox-mount" [9bedecdb-6817-4020-a684-645c0c74c968] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9bedecdb-6817-4020-a684-645c0c74c968] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.014418854s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-527118 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdany-port2899980439/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.62s)
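
The mount test above uses minikube's 9p mount to expose a host directory inside the guest, then verifies it over ssh. A condensed sketch of the same flow, using the commands from the log (the host path is illustrative; the test retries findmnt once because the mount helper takes a moment to come up):

    mkdir -p /tmp/mount-sketch
    out/minikube-linux-amd64 mount -p functional-527118 /tmp/mount-sketch:/mount-9p &   # background the mount helper
    out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T /mount-9p | grep 9p"  # confirm a 9p mount is present
    out/minikube-linux-amd64 -p functional-527118 ssh -- ls -la /mount-9p               # list the shared files from inside the guest
    out/minikube-linux-amd64 -p functional-527118 ssh "sudo umount -f /mount-9p"        # tear it down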

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 service list: (1.28723648s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-527118 service list -o json: (1.311862522s)
functional_test.go:1504: Took "1.311954823s" to run "out/minikube-linux-amd64 -p functional-527118 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdspecific-port3679525490/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.688118ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 19:59:05.378965   13449 retry.go:31] will retry after 620.16528ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdspecific-port3679525490/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh "sudo umount -f /mount-9p": exit status 1 (248.785285ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-527118 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdspecific-port3679525490/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.4:31059
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.4:31059
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.61s)
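
The ServiceCmd tests above deploy an echo-server, expose it as a NodePort service, and resolve its URL in several formats. The same flow by hand, using the logged commands; the final curl is an assumed smoke check against the endpoint the test discovered, and the NodePort (31059 here) is assigned per run:

    kubectl --context functional-527118 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-527118 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-527118 service list
    out/minikube-linux-amd64 -p functional-527118 service hello-node --url
    curl http://192.168.39.4:31059/   # assumed check, not part of the test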

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1424936865/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1424936865/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1424936865/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T" /mount1: exit status 1 (309.355775ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 19:59:07.420624   13449 retry.go:31] will retry after 348.867868ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-527118 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1424936865/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1424936865/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-527118 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1424936865/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-527118 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-527118
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-527118
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-527118
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:01:00.269664   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:01:27.972313   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m27.620235533s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.42s)
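
The remaining MultiControlPlane steps all build on the cluster started here: three control-plane nodes (ha-483793, -m02, -m03) plus a worker (-m04) added later. The start and status invocations as logged, with flags as used by the test (--auto-update-drivers=false is CI-specific):

    out/minikube-linux-amd64 -p ha-483793 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5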

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 kubectl -- rollout status deployment/busybox: (5.152203659s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-n727m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-ndd25 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-pwrzs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-n727m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-ndd25 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-pwrzs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-n727m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-ndd25 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-pwrzs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.50s)
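
DeployApp applies a busybox deployment and checks in-cluster DNS from each replica. Condensed from the logged commands (pod names such as busybox-7b57f96db7-n727m vary per run):

    out/minikube-linux-amd64 -p ha-483793 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 -p ha-483793 kubectl -- rollout status deployment/busybox
    out/minikube-linux-amd64 -p ha-483793 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-n727m -- nslookup kubernetes.default.svc.cluster.local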

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-n727m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-n727m -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-ndd25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-ndd25 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-pwrzs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 kubectl -- exec busybox-7b57f96db7-pwrzs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (49.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 node add --alsologtostderr -v 5
E1002 20:03:31.373551   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:31.379997   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:31.391397   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:31.412896   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:31.454287   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:31.535746   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:31.697265   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:32.018936   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:32.661039   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:33.942948   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:36.504928   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:03:41.626408   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 node add --alsologtostderr -v 5: (48.431804387s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.36s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-483793 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp testdata/cp-test.txt ha-483793:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1813266996/001/cp-test_ha-483793.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793:/home/docker/cp-test.txt ha-483793-m02:/home/docker/cp-test_ha-483793_ha-483793-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test_ha-483793_ha-483793-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793:/home/docker/cp-test.txt ha-483793-m03:/home/docker/cp-test_ha-483793_ha-483793-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test.txt"
E1002 20:03:51.867816   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test_ha-483793_ha-483793-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793:/home/docker/cp-test.txt ha-483793-m04:/home/docker/cp-test_ha-483793_ha-483793-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test_ha-483793_ha-483793-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp testdata/cp-test.txt ha-483793-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1813266996/001/cp-test_ha-483793-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m02:/home/docker/cp-test.txt ha-483793:/home/docker/cp-test_ha-483793-m02_ha-483793.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test_ha-483793-m02_ha-483793.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m02:/home/docker/cp-test.txt ha-483793-m03:/home/docker/cp-test_ha-483793-m02_ha-483793-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test_ha-483793-m02_ha-483793-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m02:/home/docker/cp-test.txt ha-483793-m04:/home/docker/cp-test_ha-483793-m02_ha-483793-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test_ha-483793-m02_ha-483793-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp testdata/cp-test.txt ha-483793-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1813266996/001/cp-test_ha-483793-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m03:/home/docker/cp-test.txt ha-483793:/home/docker/cp-test_ha-483793-m03_ha-483793.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test_ha-483793-m03_ha-483793.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m03:/home/docker/cp-test.txt ha-483793-m02:/home/docker/cp-test_ha-483793-m03_ha-483793-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test_ha-483793-m03_ha-483793-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m03:/home/docker/cp-test.txt ha-483793-m04:/home/docker/cp-test_ha-483793-m03_ha-483793-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test_ha-483793-m03_ha-483793-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp testdata/cp-test.txt ha-483793-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1813266996/001/cp-test_ha-483793-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m04:/home/docker/cp-test.txt ha-483793:/home/docker/cp-test_ha-483793-m04_ha-483793.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test_ha-483793-m04_ha-483793.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m04:/home/docker/cp-test.txt ha-483793-m02:/home/docker/cp-test_ha-483793-m04_ha-483793-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test_ha-483793-m04_ha-483793-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 cp ha-483793-m04:/home/docker/cp-test.txt ha-483793-m03:/home/docker/cp-test_ha-483793-m04_ha-483793-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m03 "sudo cat /home/docker/cp-test_ha-483793-m04_ha-483793-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.81s)
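
CopyFile exercises "minikube cp" in every direction between the host and the four nodes, verifying each copy with an "ssh -n <node> sudo cat" readback. One host-to-node-to-node leg, exactly as logged above:

    out/minikube-linux-amd64 -p ha-483793 cp testdata/cp-test.txt ha-483793:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p ha-483793 cp ha-483793:/home/docker/cp-test.txt ha-483793-m02:/home/docker/cp-test_ha-483793_ha-483793-m02.txt
    out/minikube-linux-amd64 -p ha-483793 ssh -n ha-483793-m02 "sudo cat /home/docker/cp-test_ha-483793_ha-483793-m02.txt"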

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (84.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 node stop m02 --alsologtostderr -v 5
E1002 20:04:12.349759   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:04:53.312092   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 node stop m02 --alsologtostderr -v 5: (1m23.63114166s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5: exit status 7 (716.02402ms)

                                                
                                                
-- stdout --
	ha-483793
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-483793-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-483793-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-483793-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1002 20:05:26.207311   27548 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:05:26.207560   27548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:05:26.207570   27548 out.go:374] Setting ErrFile to fd 2...
	I1002 20:05:26.207574   27548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:05:26.207794   27548 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:05:26.208033   27548 out.go:368] Setting JSON to false
	I1002 20:05:26.208063   27548 mustload.go:65] Loading cluster: ha-483793
	I1002 20:05:26.208135   27548 notify.go:221] Checking for updates...
	I1002 20:05:26.208564   27548 config.go:182] Loaded profile config "ha-483793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:05:26.208587   27548 status.go:174] checking status of ha-483793 ...
	I1002 20:05:26.209215   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.209255   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.232102   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I1002 20:05:26.232655   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.233226   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.233255   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.233685   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.233976   27548 main.go:141] libmachine: (ha-483793) Calling .GetState
	I1002 20:05:26.236151   27548 status.go:371] ha-483793 host status = "Running" (err=<nil>)
	I1002 20:05:26.236167   27548 host.go:66] Checking if "ha-483793" exists ...
	I1002 20:05:26.236446   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.236481   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.250007   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I1002 20:05:26.250430   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.250858   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.250886   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.251228   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.251441   27548 main.go:141] libmachine: (ha-483793) Calling .GetIP
	I1002 20:05:26.254391   27548 main.go:141] libmachine: (ha-483793) DBG | domain ha-483793 has defined MAC address 52:54:00:5e:97:85 in network mk-ha-483793
	I1002 20:05:26.254915   27548 main.go:141] libmachine: (ha-483793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:97:85", ip: ""} in network mk-ha-483793: {Iface:virbr1 ExpiryTime:2025-10-02 20:59:37 +0000 UTC Type:0 Mac:52:54:00:5e:97:85 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-483793 Clientid:01:52:54:00:5e:97:85}
	I1002 20:05:26.254967   27548 main.go:141] libmachine: (ha-483793) DBG | domain ha-483793 has defined IP address 192.168.39.228 and MAC address 52:54:00:5e:97:85 in network mk-ha-483793
	I1002 20:05:26.255100   27548 host.go:66] Checking if "ha-483793" exists ...
	I1002 20:05:26.255506   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.255557   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.268820   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I1002 20:05:26.269284   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.269744   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.269776   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.270125   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.270355   27548 main.go:141] libmachine: (ha-483793) Calling .DriverName
	I1002 20:05:26.270549   27548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:05:26.270580   27548 main.go:141] libmachine: (ha-483793) Calling .GetSSHHostname
	I1002 20:05:26.273683   27548 main.go:141] libmachine: (ha-483793) DBG | domain ha-483793 has defined MAC address 52:54:00:5e:97:85 in network mk-ha-483793
	I1002 20:05:26.274179   27548 main.go:141] libmachine: (ha-483793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:97:85", ip: ""} in network mk-ha-483793: {Iface:virbr1 ExpiryTime:2025-10-02 20:59:37 +0000 UTC Type:0 Mac:52:54:00:5e:97:85 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-483793 Clientid:01:52:54:00:5e:97:85}
	I1002 20:05:26.274215   27548 main.go:141] libmachine: (ha-483793) DBG | domain ha-483793 has defined IP address 192.168.39.228 and MAC address 52:54:00:5e:97:85 in network mk-ha-483793
	I1002 20:05:26.274371   27548 main.go:141] libmachine: (ha-483793) Calling .GetSSHPort
	I1002 20:05:26.274532   27548 main.go:141] libmachine: (ha-483793) Calling .GetSSHKeyPath
	I1002 20:05:26.274662   27548 main.go:141] libmachine: (ha-483793) Calling .GetSSHUsername
	I1002 20:05:26.274796   27548 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/ha-483793/id_rsa Username:docker}
	I1002 20:05:26.368733   27548 ssh_runner.go:195] Run: systemctl --version
	I1002 20:05:26.377251   27548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:05:26.397693   27548 kubeconfig.go:125] found "ha-483793" server: "https://192.168.39.254:8443"
	I1002 20:05:26.397753   27548 api_server.go:166] Checking apiserver status ...
	I1002 20:05:26.397808   27548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:05:26.426941   27548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	W1002 20:05:26.443477   27548 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:05:26.443532   27548 ssh_runner.go:195] Run: ls
	I1002 20:05:26.451025   27548 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1002 20:05:26.456542   27548 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1002 20:05:26.456579   27548 status.go:463] ha-483793 apiserver status = Running (err=<nil>)
	I1002 20:05:26.456592   27548 status.go:176] ha-483793 status: &{Name:ha-483793 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:05:26.456609   27548 status.go:174] checking status of ha-483793-m02 ...
	I1002 20:05:26.456964   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.457013   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.471801   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I1002 20:05:26.472296   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.472789   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.472812   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.473154   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.473350   27548 main.go:141] libmachine: (ha-483793-m02) Calling .GetState
	I1002 20:05:26.475222   27548 status.go:371] ha-483793-m02 host status = "Stopped" (err=<nil>)
	I1002 20:05:26.475244   27548 status.go:384] host is not running, skipping remaining checks
	I1002 20:05:26.475251   27548 status.go:176] ha-483793-m02 status: &{Name:ha-483793-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:05:26.475267   27548 status.go:174] checking status of ha-483793-m03 ...
	I1002 20:05:26.475575   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.475615   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.489420   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1002 20:05:26.489892   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.490351   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.490375   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.490710   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.491058   27548 main.go:141] libmachine: (ha-483793-m03) Calling .GetState
	I1002 20:05:26.492974   27548 status.go:371] ha-483793-m03 host status = "Running" (err=<nil>)
	I1002 20:05:26.492990   27548 host.go:66] Checking if "ha-483793-m03" exists ...
	I1002 20:05:26.493301   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.493340   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.507302   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38039
	I1002 20:05:26.507820   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.508299   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.508323   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.508648   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.508873   27548 main.go:141] libmachine: (ha-483793-m03) Calling .GetIP
	I1002 20:05:26.512320   27548 main.go:141] libmachine: (ha-483793-m03) DBG | domain ha-483793-m03 has defined MAC address 52:54:00:7e:ec:c1 in network mk-ha-483793
	I1002 20:05:26.512801   27548 main.go:141] libmachine: (ha-483793-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:ec:c1", ip: ""} in network mk-ha-483793: {Iface:virbr1 ExpiryTime:2025-10-02 21:01:40 +0000 UTC Type:0 Mac:52:54:00:7e:ec:c1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:ha-483793-m03 Clientid:01:52:54:00:7e:ec:c1}
	I1002 20:05:26.512840   27548 main.go:141] libmachine: (ha-483793-m03) DBG | domain ha-483793-m03 has defined IP address 192.168.39.199 and MAC address 52:54:00:7e:ec:c1 in network mk-ha-483793
	I1002 20:05:26.513032   27548 host.go:66] Checking if "ha-483793-m03" exists ...
	I1002 20:05:26.513361   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.513413   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.527424   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46475
	I1002 20:05:26.528103   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.528618   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.528638   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.528980   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.529212   27548 main.go:141] libmachine: (ha-483793-m03) Calling .DriverName
	I1002 20:05:26.529479   27548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:05:26.529498   27548 main.go:141] libmachine: (ha-483793-m03) Calling .GetSSHHostname
	I1002 20:05:26.532950   27548 main.go:141] libmachine: (ha-483793-m03) DBG | domain ha-483793-m03 has defined MAC address 52:54:00:7e:ec:c1 in network mk-ha-483793
	I1002 20:05:26.533595   27548 main.go:141] libmachine: (ha-483793-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:ec:c1", ip: ""} in network mk-ha-483793: {Iface:virbr1 ExpiryTime:2025-10-02 21:01:40 +0000 UTC Type:0 Mac:52:54:00:7e:ec:c1 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:ha-483793-m03 Clientid:01:52:54:00:7e:ec:c1}
	I1002 20:05:26.533616   27548 main.go:141] libmachine: (ha-483793-m03) DBG | domain ha-483793-m03 has defined IP address 192.168.39.199 and MAC address 52:54:00:7e:ec:c1 in network mk-ha-483793
	I1002 20:05:26.533845   27548 main.go:141] libmachine: (ha-483793-m03) Calling .GetSSHPort
	I1002 20:05:26.534049   27548 main.go:141] libmachine: (ha-483793-m03) Calling .GetSSHKeyPath
	I1002 20:05:26.534198   27548 main.go:141] libmachine: (ha-483793-m03) Calling .GetSSHUsername
	I1002 20:05:26.534334   27548 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/ha-483793-m03/id_rsa Username:docker}
	I1002 20:05:26.628084   27548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:05:26.651249   27548 kubeconfig.go:125] found "ha-483793" server: "https://192.168.39.254:8443"
	I1002 20:05:26.651294   27548 api_server.go:166] Checking apiserver status ...
	I1002 20:05:26.651349   27548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:05:26.679016   27548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1760/cgroup
	W1002 20:05:26.692745   27548 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1760/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:05:26.692819   27548 ssh_runner.go:195] Run: ls
	I1002 20:05:26.701096   27548 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1002 20:05:26.707135   27548 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1002 20:05:26.707170   27548 status.go:463] ha-483793-m03 apiserver status = Running (err=<nil>)
	I1002 20:05:26.707183   27548 status.go:176] ha-483793-m03 status: &{Name:ha-483793-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:05:26.707208   27548 status.go:174] checking status of ha-483793-m04 ...
	I1002 20:05:26.707669   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.707743   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.721933   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33299
	I1002 20:05:26.722382   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.722856   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.722892   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.723259   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.723476   27548 main.go:141] libmachine: (ha-483793-m04) Calling .GetState
	I1002 20:05:26.725400   27548 status.go:371] ha-483793-m04 host status = "Running" (err=<nil>)
	I1002 20:05:26.725415   27548 host.go:66] Checking if "ha-483793-m04" exists ...
	I1002 20:05:26.725686   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.725731   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.739887   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I1002 20:05:26.740347   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.740751   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.740772   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.741097   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.741285   27548 main.go:141] libmachine: (ha-483793-m04) Calling .GetIP
	I1002 20:05:26.744286   27548 main.go:141] libmachine: (ha-483793-m04) DBG | domain ha-483793-m04 has defined MAC address 52:54:00:db:63:17 in network mk-ha-483793
	I1002 20:05:26.744760   27548 main.go:141] libmachine: (ha-483793-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:63:17", ip: ""} in network mk-ha-483793: {Iface:virbr1 ExpiryTime:2025-10-02 21:03:16 +0000 UTC Type:0 Mac:52:54:00:db:63:17 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-483793-m04 Clientid:01:52:54:00:db:63:17}
	I1002 20:05:26.744790   27548 main.go:141] libmachine: (ha-483793-m04) DBG | domain ha-483793-m04 has defined IP address 192.168.39.231 and MAC address 52:54:00:db:63:17 in network mk-ha-483793
	I1002 20:05:26.744940   27548 host.go:66] Checking if "ha-483793-m04" exists ...
	I1002 20:05:26.745232   27548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:05:26.745269   27548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:05:26.758493   27548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I1002 20:05:26.758890   27548 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:05:26.759381   27548 main.go:141] libmachine: Using API Version  1
	I1002 20:05:26.759399   27548 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:05:26.759701   27548 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:05:26.759913   27548 main.go:141] libmachine: (ha-483793-m04) Calling .DriverName
	I1002 20:05:26.760244   27548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:05:26.760278   27548 main.go:141] libmachine: (ha-483793-m04) Calling .GetSSHHostname
	I1002 20:05:26.763586   27548 main.go:141] libmachine: (ha-483793-m04) DBG | domain ha-483793-m04 has defined MAC address 52:54:00:db:63:17 in network mk-ha-483793
	I1002 20:05:26.764011   27548 main.go:141] libmachine: (ha-483793-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:63:17", ip: ""} in network mk-ha-483793: {Iface:virbr1 ExpiryTime:2025-10-02 21:03:16 +0000 UTC Type:0 Mac:52:54:00:db:63:17 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-483793-m04 Clientid:01:52:54:00:db:63:17}
	I1002 20:05:26.764038   27548 main.go:141] libmachine: (ha-483793-m04) DBG | domain ha-483793-m04 has defined IP address 192.168.39.231 and MAC address 52:54:00:db:63:17 in network mk-ha-483793
	I1002 20:05:26.764245   27548 main.go:141] libmachine: (ha-483793-m04) Calling .GetSSHPort
	I1002 20:05:26.764479   27548 main.go:141] libmachine: (ha-483793-m04) Calling .GetSSHKeyPath
	I1002 20:05:26.764625   27548 main.go:141] libmachine: (ha-483793-m04) Calling .GetSSHUsername
	I1002 20:05:26.764795   27548 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/ha-483793-m04/id_rsa Username:docker}
	I1002 20:05:26.851592   27548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:05:26.872849   27548 status.go:176] ha-483793-m04 status: &{Name:ha-483793-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (84.35s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.73s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 node start m02 --alsologtostderr -v 5
E1002 20:06:00.270063   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 node start m02 --alsologtostderr -v 5: (34.387339961s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5: (1.251443691s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.313025713s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.31s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.7s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 stop --alsologtostderr -v 5
E1002 20:06:15.233986   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:08:31.373240   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:08:59.075828   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 stop --alsologtostderr -v 5: (4m17.862180958s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 start --wait true --alsologtostderr -v 5
E1002 20:11:00.276944   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:12:23.334035   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 start --wait true --alsologtostderr -v 5: (2m5.704642406s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.70s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.98s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 node delete m03 --alsologtostderr -v 5: (18.179985765s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.98s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (238.87s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 stop --alsologtostderr -v 5
E1002 20:13:31.373580   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:16:00.271474   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 stop --alsologtostderr -v 5: (3m58.767970151s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5: exit status 7 (106.483477ms)

-- stdout --
	ha-483793
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-483793-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-483793-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 20:16:46.819918   31491 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:16:46.820173   31491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:16:46.820183   31491 out.go:374] Setting ErrFile to fd 2...
	I1002 20:16:46.820188   31491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:16:46.820407   31491 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:16:46.820574   31491 out.go:368] Setting JSON to false
	I1002 20:16:46.820597   31491 mustload.go:65] Loading cluster: ha-483793
	I1002 20:16:46.820763   31491 notify.go:221] Checking for updates...
	I1002 20:16:46.821013   31491 config.go:182] Loaded profile config "ha-483793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:16:46.821030   31491 status.go:174] checking status of ha-483793 ...
	I1002 20:16:46.821444   31491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:16:46.821479   31491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:16:46.843651   31491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35219
	I1002 20:16:46.844161   31491 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:16:46.844694   31491 main.go:141] libmachine: Using API Version  1
	I1002 20:16:46.844765   31491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:16:46.845115   31491 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:16:46.845305   31491 main.go:141] libmachine: (ha-483793) Calling .GetState
	I1002 20:16:46.847130   31491 status.go:371] ha-483793 host status = "Stopped" (err=<nil>)
	I1002 20:16:46.847147   31491 status.go:384] host is not running, skipping remaining checks
	I1002 20:16:46.847153   31491 status.go:176] ha-483793 status: &{Name:ha-483793 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:16:46.847169   31491 status.go:174] checking status of ha-483793-m02 ...
	I1002 20:16:46.847456   31491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:16:46.847490   31491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:16:46.860288   31491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I1002 20:16:46.860690   31491 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:16:46.861121   31491 main.go:141] libmachine: Using API Version  1
	I1002 20:16:46.861150   31491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:16:46.861428   31491 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:16:46.861573   31491 main.go:141] libmachine: (ha-483793-m02) Calling .GetState
	I1002 20:16:46.863324   31491 status.go:371] ha-483793-m02 host status = "Stopped" (err=<nil>)
	I1002 20:16:46.863339   31491 status.go:384] host is not running, skipping remaining checks
	I1002 20:16:46.863346   31491 status.go:176] ha-483793-m02 status: &{Name:ha-483793-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:16:46.863376   31491 status.go:174] checking status of ha-483793-m04 ...
	I1002 20:16:46.863656   31491 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:16:46.863694   31491 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:16:46.876348   31491 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I1002 20:16:46.876788   31491 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:16:46.877252   31491 main.go:141] libmachine: Using API Version  1
	I1002 20:16:46.877282   31491 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:16:46.877652   31491 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:16:46.877827   31491 main.go:141] libmachine: (ha-483793-m04) Calling .GetState
	I1002 20:16:46.879256   31491 status.go:371] ha-483793-m04 host status = "Stopped" (err=<nil>)
	I1002 20:16:46.879269   31491 status.go:384] host is not running, skipping remaining checks
	I1002 20:16:46.879276   31491 status.go:176] ha-483793-m04 status: &{Name:ha-483793-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (238.87s)

TestMultiControlPlane/serial/RestartCluster (100.44s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m39.640850049s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.44s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (81.81s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 node add --control-plane --alsologtostderr -v 5
E1002 20:18:31.373160   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-483793 node add --control-plane --alsologtostderr -v 5: (1m20.897460097s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-483793 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

TestJSONOutput/start/Command (80.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-246022 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:19:54.437818   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:21:00.277669   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-246022 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.83644587s)
--- PASS: TestJSONOutput/start/Command (80.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.8s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-246022 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-246022 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-246022 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-246022 --output=json --user=testUser: (7.095669826s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-854997 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-854997 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (61.250227ms)

-- stdout --
	{"specversion":"1.0","id":"f392952e-f61e-4c18-a3b2-c54b81ef5a3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-854997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b033c0c0-4d31-4d05-b2f8-1f0ccdfceacc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"cb91a998-f1c9-44dc-b085-c48b3e477cda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90a754ac-723a-4347-9f69-72a75919ef91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig"}}
	{"specversion":"1.0","id":"73f89c04-e690-469b-8003-fe96500d5b3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube"}}
	{"specversion":"1.0","id":"d41d30f0-9ab4-4570-bc8e-81b2eced3d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0381f9d3-73f0-41f8-bc3a-1dadd1f1e283","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"77c3ab45-2dea-4e3d-aff5-e0e0178e5ede","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-854997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-854997
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (85.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-488531 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-488531 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.152769599s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-501668 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-501668 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.313936147s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-488531
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-501668
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-501668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-501668
helpers_test.go:175: Cleaning up "first-488531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-488531
--- PASS: TestMinikubeProfile (85.28s)

TestMountStart/serial/StartWithMountFirst (22.34s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-697804 --memory=3072 --mount-string /tmp/TestMountStartserial74651854/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-697804 --memory=3072 --mount-string /tmp/TestMountStartserial74651854/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.336726743s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.34s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-697804 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-697804 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (24.78s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-712711 --memory=3072 --mount-string /tmp/TestMountStartserial74651854/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:23:31.373174   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-712711 --memory=3072 --mount-string /tmp/TestMountStartserial74651854/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.778706282s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.78s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-697804 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-712711
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-712711: (1.314191604s)
--- PASS: TestMountStart/serial/Stop (1.31s)

TestMountStart/serial/RestartStopped (20.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-712711
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-712711: (19.347125076s)
--- PASS: TestMountStart/serial/RestartStopped (20.35s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-712711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (131.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-273324 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:26:00.270166   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-273324 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m11.320069102s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.76s)
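For a local reproduction, the fresh two-node start reduces to the same flags the test passes; only the profile name is this run's own choice. A minimal sketch:

  # two-node cluster on the KVM driver with the CRI-O runtime
  minikube start -p multinode-273324 --wait=true --memory=3072 --nodes=2 \
      --driver=kvm2 --container-runtime=crio
  # both the control plane and the worker should report Running
  minikube -p multinode-273324 status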

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-273324 -- rollout status deployment/busybox: (4.50395747s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-tt9rz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-v7tp4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-tt9rz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-v7tp4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-tt9rz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-v7tp4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.98s)
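The deployment check is: apply the busybox manifest, wait for the rollout, then resolve internal and external names from each replica. A sketch of the same sequence, with <pod-name> standing in for whichever replica names the get pods call prints:

  minikube kubectl -p multinode-273324 -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
  minikube kubectl -p multinode-273324 -- rollout status deployment/busybox
  minikube kubectl -p multinode-273324 -- get pods -o jsonpath='{.items[*].metadata.name}'
  # repeat for each replica so both nodes are exercised
  minikube kubectl -p multinode-273324 -- exec <pod-name> -- nslookup kubernetes.io
  minikube kubectl -p multinode-273324 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local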

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-tt9rz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-tt9rz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-v7tp4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-273324 -- exec busybox-7b57f96db7-v7tp4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
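The host-reachability check resolves host.minikube.internal inside a pod and pings the address it maps to (the libvirt network gateway, 192.168.39.1 in this run). A sketch of the same pipeline, with <pod-name> as a placeholder:

  # nslookup's fifth output line carries the resolved address
  HOST_IP=$(minikube kubectl -p multinode-273324 -- exec <pod-name> -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  minikube kubectl -p multinode-273324 -- exec <pod-name> -- sh -c "ping -c 1 $HOST_IP"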

                                                
                                    
TestMultiNode/serial/AddNode (45.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-273324 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-273324 -v=5 --alsologtostderr: (45.174766103s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.81s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-273324 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp testdata/cp-test.txt multinode-273324:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2500355409/001/cp-test_multinode-273324.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324:/home/docker/cp-test.txt multinode-273324-m02:/home/docker/cp-test_multinode-273324_multinode-273324-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m02 "sudo cat /home/docker/cp-test_multinode-273324_multinode-273324-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324:/home/docker/cp-test.txt multinode-273324-m03:/home/docker/cp-test_multinode-273324_multinode-273324-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m03 "sudo cat /home/docker/cp-test_multinode-273324_multinode-273324-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp testdata/cp-test.txt multinode-273324-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2500355409/001/cp-test_multinode-273324-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324-m02:/home/docker/cp-test.txt multinode-273324:/home/docker/cp-test_multinode-273324-m02_multinode-273324.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324 "sudo cat /home/docker/cp-test_multinode-273324-m02_multinode-273324.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324-m02:/home/docker/cp-test.txt multinode-273324-m03:/home/docker/cp-test_multinode-273324-m02_multinode-273324-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m03 "sudo cat /home/docker/cp-test_multinode-273324-m02_multinode-273324-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp testdata/cp-test.txt multinode-273324-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2500355409/001/cp-test_multinode-273324-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324-m03:/home/docker/cp-test.txt multinode-273324:/home/docker/cp-test_multinode-273324-m03_multinode-273324.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324 "sudo cat /home/docker/cp-test_multinode-273324-m03_multinode-273324.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 cp multinode-273324-m03:/home/docker/cp-test.txt multinode-273324-m02:/home/docker/cp-test_multinode-273324-m03_multinode-273324-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 ssh -n multinode-273324-m02 "sudo cat /home/docker/cp-test_multinode-273324-m03_multinode-273324-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.35s)
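The copy matrix above exercises minikube cp in every direction (host to node, node to host, node to node), with each transfer verified by a cat over ssh. Two legs of that matrix as a sketch:

  # host -> primary node, then read it back
  minikube -p multinode-273324 cp testdata/cp-test.txt multinode-273324:/home/docker/cp-test.txt
  minikube -p multinode-273324 ssh -n multinode-273324 "sudo cat /home/docker/cp-test.txt"
  # node -> node: primary to the second worker
  minikube -p multinode-273324 cp multinode-273324:/home/docker/cp-test.txt \
      multinode-273324-m02:/home/docker/cp-test_multinode-273324_multinode-273324-m02.txt
  minikube -p multinode-273324 ssh -n multinode-273324-m02 \
      "sudo cat /home/docker/cp-test_multinode-273324_multinode-273324-m02.txt"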

                                                
                                    
TestMultiNode/serial/StopNode (2.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-273324 node stop m03: (1.590560385s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-273324 status: exit status 7 (437.744281ms)

                                                
                                                
-- stdout --
	multinode-273324
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-273324-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-273324-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr: exit status 7 (438.103017ms)

                                                
                                                
-- stdout --
	multinode-273324
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-273324-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-273324-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:27:15.826668   39744 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:27:15.826975   39744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:15.826986   39744 out.go:374] Setting ErrFile to fd 2...
	I1002 20:27:15.826993   39744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:27:15.827215   39744 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:27:15.827397   39744 out.go:368] Setting JSON to false
	I1002 20:27:15.827423   39744 mustload.go:65] Loading cluster: multinode-273324
	I1002 20:27:15.827530   39744 notify.go:221] Checking for updates...
	I1002 20:27:15.827874   39744 config.go:182] Loaded profile config "multinode-273324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:27:15.827890   39744 status.go:174] checking status of multinode-273324 ...
	I1002 20:27:15.828398   39744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:27:15.828435   39744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:27:15.843096   39744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I1002 20:27:15.843563   39744 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:27:15.844110   39744 main.go:141] libmachine: Using API Version  1
	I1002 20:27:15.844127   39744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:27:15.844534   39744 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:27:15.844773   39744 main.go:141] libmachine: (multinode-273324) Calling .GetState
	I1002 20:27:15.846561   39744 status.go:371] multinode-273324 host status = "Running" (err=<nil>)
	I1002 20:27:15.846577   39744 host.go:66] Checking if "multinode-273324" exists ...
	I1002 20:27:15.846923   39744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:27:15.846975   39744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:27:15.861902   39744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I1002 20:27:15.862373   39744 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:27:15.862846   39744 main.go:141] libmachine: Using API Version  1
	I1002 20:27:15.862876   39744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:27:15.863190   39744 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:27:15.863360   39744 main.go:141] libmachine: (multinode-273324) Calling .GetIP
	I1002 20:27:15.866365   39744 main.go:141] libmachine: (multinode-273324) DBG | domain multinode-273324 has defined MAC address 52:54:00:0c:05:29 in network mk-multinode-273324
	I1002 20:27:15.866888   39744 main.go:141] libmachine: (multinode-273324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:05:29", ip: ""} in network mk-multinode-273324: {Iface:virbr1 ExpiryTime:2025-10-02 21:24:17 +0000 UTC Type:0 Mac:52:54:00:0c:05:29 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-273324 Clientid:01:52:54:00:0c:05:29}
	I1002 20:27:15.866918   39744 main.go:141] libmachine: (multinode-273324) DBG | domain multinode-273324 has defined IP address 192.168.39.68 and MAC address 52:54:00:0c:05:29 in network mk-multinode-273324
	I1002 20:27:15.867077   39744 host.go:66] Checking if "multinode-273324" exists ...
	I1002 20:27:15.867418   39744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:27:15.867457   39744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:27:15.881328   39744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I1002 20:27:15.881767   39744 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:27:15.882176   39744 main.go:141] libmachine: Using API Version  1
	I1002 20:27:15.882199   39744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:27:15.882642   39744 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:27:15.882840   39744 main.go:141] libmachine: (multinode-273324) Calling .DriverName
	I1002 20:27:15.883035   39744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:27:15.883056   39744 main.go:141] libmachine: (multinode-273324) Calling .GetSSHHostname
	I1002 20:27:15.885922   39744 main.go:141] libmachine: (multinode-273324) DBG | domain multinode-273324 has defined MAC address 52:54:00:0c:05:29 in network mk-multinode-273324
	I1002 20:27:15.886349   39744 main.go:141] libmachine: (multinode-273324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:05:29", ip: ""} in network mk-multinode-273324: {Iface:virbr1 ExpiryTime:2025-10-02 21:24:17 +0000 UTC Type:0 Mac:52:54:00:0c:05:29 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-273324 Clientid:01:52:54:00:0c:05:29}
	I1002 20:27:15.886369   39744 main.go:141] libmachine: (multinode-273324) DBG | domain multinode-273324 has defined IP address 192.168.39.68 and MAC address 52:54:00:0c:05:29 in network mk-multinode-273324
	I1002 20:27:15.886527   39744 main.go:141] libmachine: (multinode-273324) Calling .GetSSHPort
	I1002 20:27:15.886774   39744 main.go:141] libmachine: (multinode-273324) Calling .GetSSHKeyPath
	I1002 20:27:15.886931   39744 main.go:141] libmachine: (multinode-273324) Calling .GetSSHUsername
	I1002 20:27:15.887087   39744 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/multinode-273324/id_rsa Username:docker}
	I1002 20:27:15.968411   39744 ssh_runner.go:195] Run: systemctl --version
	I1002 20:27:15.975191   39744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:27:15.993437   39744 kubeconfig.go:125] found "multinode-273324" server: "https://192.168.39.68:8443"
	I1002 20:27:15.993488   39744 api_server.go:166] Checking apiserver status ...
	I1002 20:27:15.993543   39744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:27:16.014491   39744 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	W1002 20:27:16.027052   39744 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:27:16.027104   39744 ssh_runner.go:195] Run: ls
	I1002 20:27:16.032399   39744 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I1002 20:27:16.039731   39744 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I1002 20:27:16.039752   39744 status.go:463] multinode-273324 apiserver status = Running (err=<nil>)
	I1002 20:27:16.039761   39744 status.go:176] multinode-273324 status: &{Name:multinode-273324 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:27:16.039778   39744 status.go:174] checking status of multinode-273324-m02 ...
	I1002 20:27:16.040161   39744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:27:16.040215   39744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:27:16.053844   39744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I1002 20:27:16.054395   39744 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:27:16.054866   39744 main.go:141] libmachine: Using API Version  1
	I1002 20:27:16.054886   39744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:27:16.055223   39744 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:27:16.055397   39744 main.go:141] libmachine: (multinode-273324-m02) Calling .GetState
	I1002 20:27:16.057085   39744 status.go:371] multinode-273324-m02 host status = "Running" (err=<nil>)
	I1002 20:27:16.057103   39744 host.go:66] Checking if "multinode-273324-m02" exists ...
	I1002 20:27:16.057387   39744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:27:16.057422   39744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:27:16.070915   39744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32829
	I1002 20:27:16.071420   39744 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:27:16.071911   39744 main.go:141] libmachine: Using API Version  1
	I1002 20:27:16.071936   39744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:27:16.072274   39744 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:27:16.072459   39744 main.go:141] libmachine: (multinode-273324-m02) Calling .GetIP
	I1002 20:27:16.075238   39744 main.go:141] libmachine: (multinode-273324-m02) DBG | domain multinode-273324-m02 has defined MAC address 52:54:00:dd:23:f4 in network mk-multinode-273324
	I1002 20:27:16.075634   39744 main.go:141] libmachine: (multinode-273324-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:23:f4", ip: ""} in network mk-multinode-273324: {Iface:virbr1 ExpiryTime:2025-10-02 21:25:43 +0000 UTC Type:0 Mac:52:54:00:dd:23:f4 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-273324-m02 Clientid:01:52:54:00:dd:23:f4}
	I1002 20:27:16.075668   39744 main.go:141] libmachine: (multinode-273324-m02) DBG | domain multinode-273324-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:dd:23:f4 in network mk-multinode-273324
	I1002 20:27:16.075836   39744 host.go:66] Checking if "multinode-273324-m02" exists ...
	I1002 20:27:16.076139   39744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:27:16.076183   39744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:27:16.089760   39744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I1002 20:27:16.090226   39744 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:27:16.090712   39744 main.go:141] libmachine: Using API Version  1
	I1002 20:27:16.090758   39744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:27:16.091058   39744 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:27:16.091239   39744 main.go:141] libmachine: (multinode-273324-m02) Calling .DriverName
	I1002 20:27:16.091410   39744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:27:16.091428   39744 main.go:141] libmachine: (multinode-273324-m02) Calling .GetSSHHostname
	I1002 20:27:16.094577   39744 main.go:141] libmachine: (multinode-273324-m02) DBG | domain multinode-273324-m02 has defined MAC address 52:54:00:dd:23:f4 in network mk-multinode-273324
	I1002 20:27:16.095068   39744 main.go:141] libmachine: (multinode-273324-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:23:f4", ip: ""} in network mk-multinode-273324: {Iface:virbr1 ExpiryTime:2025-10-02 21:25:43 +0000 UTC Type:0 Mac:52:54:00:dd:23:f4 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-273324-m02 Clientid:01:52:54:00:dd:23:f4}
	I1002 20:27:16.095093   39744 main.go:141] libmachine: (multinode-273324-m02) DBG | domain multinode-273324-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:dd:23:f4 in network mk-multinode-273324
	I1002 20:27:16.095257   39744 main.go:141] libmachine: (multinode-273324-m02) Calling .GetSSHPort
	I1002 20:27:16.095415   39744 main.go:141] libmachine: (multinode-273324-m02) Calling .GetSSHKeyPath
	I1002 20:27:16.095551   39744 main.go:141] libmachine: (multinode-273324-m02) Calling .GetSSHUsername
	I1002 20:27:16.095690   39744 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-9524/.minikube/machines/multinode-273324-m02/id_rsa Username:docker}
	I1002 20:27:16.179620   39744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:27:16.198165   39744 status.go:176] multinode-273324-m02 status: &{Name:multinode-273324-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:27:16.198228   39744 status.go:174] checking status of multinode-273324-m03 ...
	I1002 20:27:16.198582   39744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:27:16.198628   39744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:27:16.212392   39744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38517
	I1002 20:27:16.212852   39744 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:27:16.213303   39744 main.go:141] libmachine: Using API Version  1
	I1002 20:27:16.213319   39744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:27:16.213638   39744 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:27:16.213870   39744 main.go:141] libmachine: (multinode-273324-m03) Calling .GetState
	I1002 20:27:16.215606   39744 status.go:371] multinode-273324-m03 host status = "Stopped" (err=<nil>)
	I1002 20:27:16.215624   39744 status.go:384] host is not running, skipping remaining checks
	I1002 20:27:16.215630   39744 status.go:176] multinode-273324-m03 status: &{Name:multinode-273324-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
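Stopping a single node leaves the cluster reachable but turns minikube status into a non-zero report (exit 7), with the stopped worker listed as Stopped. Sketch:

  minikube -p multinode-273324 node stop m03
  # exit status 7 signals a partially stopped cluster; the per-node listing shows which host is down
  minikube -p multinode-273324 status || echo "status exited with $?"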

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-273324 node start m03 -v=5 --alsologtostderr: (38.711304059s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.36s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (303.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-273324
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-273324
E1002 20:28:31.373366   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:29:03.337235   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-273324: (2m51.743917531s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-273324 --wait=true -v=5 --alsologtostderr
E1002 20:31:00.270042   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-273324 --wait=true -v=5 --alsologtostderr: (2m11.69590402s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-273324
--- PASS: TestMultiNode/serial/RestartKeepsNodes (303.54s)
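The restart check is a stop of the whole profile followed by a start with --wait=true, with the node list captured before and after. A sketch; the redirect-and-diff wrapper is only an illustration of the comparison the test performs:

  minikube node list -p multinode-273324 > nodes.before
  minikube stop -p multinode-273324
  minikube start -p multinode-273324 --wait=true
  minikube node list -p multinode-273324 > nodes.after
  diff nodes.before nodes.after   # expected empty: all nodes survive the restart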

                                                
                                    
TestMultiNode/serial/DeleteNode (2.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-273324 node delete m03: (2.35202167s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.94s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (173.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 stop
E1002 20:33:31.374141   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-273324 stop: (2m53.286229896s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-273324 status: exit status 7 (90.27595ms)

                                                
                                                
-- stdout --
	multinode-273324
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-273324-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr: exit status 7 (83.766437ms)

                                                
                                                
-- stdout --
	multinode-273324
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-273324-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:35:55.480548   42529 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:35:55.480826   42529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:55.480863   42529 out.go:374] Setting ErrFile to fd 2...
	I1002 20:35:55.480870   42529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:35:55.481327   42529 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:35:55.481640   42529 out.go:368] Setting JSON to false
	I1002 20:35:55.481752   42529 mustload.go:65] Loading cluster: multinode-273324
	I1002 20:35:55.481822   42529 notify.go:221] Checking for updates...
	I1002 20:35:55.482499   42529 config.go:182] Loaded profile config "multinode-273324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:55.482518   42529 status.go:174] checking status of multinode-273324 ...
	I1002 20:35:55.483044   42529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:35:55.483083   42529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:35:55.497394   42529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I1002 20:35:55.497851   42529 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:35:55.498490   42529 main.go:141] libmachine: Using API Version  1
	I1002 20:35:55.498520   42529 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:35:55.498932   42529 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:35:55.499145   42529 main.go:141] libmachine: (multinode-273324) Calling .GetState
	I1002 20:35:55.500924   42529 status.go:371] multinode-273324 host status = "Stopped" (err=<nil>)
	I1002 20:35:55.500959   42529 status.go:384] host is not running, skipping remaining checks
	I1002 20:35:55.500969   42529 status.go:176] multinode-273324 status: &{Name:multinode-273324 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 20:35:55.500999   42529 status.go:174] checking status of multinode-273324-m02 ...
	I1002 20:35:55.501301   42529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1002 20:35:55.501346   42529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:35:55.514739   42529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I1002 20:35:55.515121   42529 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:35:55.515550   42529 main.go:141] libmachine: Using API Version  1
	I1002 20:35:55.515572   42529 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:35:55.515896   42529 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:35:55.516097   42529 main.go:141] libmachine: (multinode-273324-m02) Calling .GetState
	I1002 20:35:55.517733   42529 status.go:371] multinode-273324-m02 host status = "Stopped" (err=<nil>)
	I1002 20:35:55.517750   42529 status.go:384] host is not running, skipping remaining checks
	I1002 20:35:55.517758   42529 status.go:176] multinode-273324-m02 status: &{Name:multinode-273324-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (173.46s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (119.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-273324 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:36:00.271540   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:36:34.440441   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-273324 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m59.303218781s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-273324 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (119.90s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-273324
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-273324-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-273324-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (71.368228ms)

                                                
                                                
-- stdout --
	* [multinode-273324-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-273324-m02' is duplicated with machine name 'multinode-273324-m02' in profile 'multinode-273324'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-273324-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:38:31.373864   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-273324-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.518729609s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-273324
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-273324: exit status 80 (225.185142ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-273324 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-273324-m03 already exists in multinode-273324-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-273324-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.69s)
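Two invariants are exercised here: a new profile may not reuse a machine name that an existing multi-node profile already owns (start exits 14 with MK_USAGE), and node add refuses to create a node whose generated name collides with a standalone profile (exit 80, GUEST_NODE_ADD). A sketch of the first case, using the names from this run:

  # multinode-273324 already owns a machine named multinode-273324-m02,
  # so a profile with that name is rejected
  minikube start -p multinode-273324-m02 --driver=kvm2 --container-runtime=crio
  echo $?   # 14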

                                                
                                    
TestScheduledStopUnix (113.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-605036 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-605036 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.777277978s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605036 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-605036 -n scheduled-stop-605036
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605036 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 20:42:14.270652   13449 retry.go:31] will retry after 129.832µs: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.271781   13449 retry.go:31] will retry after 115.227µs: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.272921   13449 retry.go:31] will retry after 190.222µs: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.274017   13449 retry.go:31] will retry after 169.618µs: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.275151   13449 retry.go:31] will retry after 686.413µs: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.276296   13449 retry.go:31] will retry after 875.429µs: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.277462   13449 retry.go:31] will retry after 1.287889ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.279683   13449 retry.go:31] will retry after 1.805172ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.281876   13449 retry.go:31] will retry after 3.216664ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.286112   13449 retry.go:31] will retry after 3.838822ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.290318   13449 retry.go:31] will retry after 4.270197ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.295530   13449 retry.go:31] will retry after 12.852889ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.308781   13449 retry.go:31] will retry after 12.444024ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.322063   13449 retry.go:31] will retry after 25.161268ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
I1002 20:42:14.348351   13449 retry.go:31] will retry after 16.157861ms: open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/scheduled-stop-605036/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605036 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605036 -n scheduled-stop-605036
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-605036
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-605036 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-605036
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-605036: exit status 7 (63.805925ms)

                                                
                                                
-- stdout --
	scheduled-stop-605036
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605036 -n scheduled-stop-605036
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-605036 -n scheduled-stop-605036: exit status 7 (64.990861ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-605036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-605036
--- PASS: TestScheduledStopUnix (113.46s)
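Scheduled stop is driven entirely by flags on minikube stop: a duration arms a background stop, --cancel-scheduled withdraws it, and once a short schedule fires the profile reports Stopped (status then exits 7). Sketch; the sleep is only a stand-in for the polling the test does while waiting for the schedule to fire:

  minikube stop -p scheduled-stop-605036 --schedule 5m        # arm a stop five minutes out
  minikube stop -p scheduled-stop-605036 --cancel-scheduled   # withdraw it
  minikube stop -p scheduled-stop-605036 --schedule 15s       # re-arm with a short delay
  sleep 20
  minikube status -p scheduled-stop-605036                    # exits 7 once the host is Stopped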

                                                
                                    
TestRunningBinaryUpgrade (164.67s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2343311637 start -p running-upgrade-571399 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:43:31.373463   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2343311637 start -p running-upgrade-571399 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m46.040152113s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-571399 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-571399 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.743043011s)
helpers_test.go:175: Cleaning up "running-upgrade-571399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-571399
--- PASS: TestRunningBinaryUpgrade (164.67s)
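The running-binary upgrade starts a cluster with an older minikube release and then re-runs start against the same, still-running profile with the binary under test. A sketch, assuming the v1.32.0 release binary has been fetched to ./minikube-v1.32.0 (the test downloads it to a temporary path):

  ./minikube-v1.32.0 start -p running-upgrade-571399 --memory=3072 \
      --vm-driver=kvm2 --container-runtime=crio
  # the newer binary adopts and upgrades the running cluster in place
  out/minikube-linux-amd64 start -p running-upgrade-571399 --memory=3072 \
      --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 delete -p running-upgrade-571399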

                                                
                                    
TestKubernetesUpgrade (183.74s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:46:00.269718   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m0.768831544s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-787090
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-787090: (2.438559527s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-787090 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-787090 status --format={{.Host}}: exit status 7 (77.253047ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.450847152s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-787090 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (89.159084ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-787090] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-787090
	    minikube start -p kubernetes-upgrade-787090 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7870902 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-787090 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.649374938s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-787090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-787090
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-787090: (1.202062165s)
--- PASS: TestKubernetesUpgrade (183.74s)
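The upgrade path is: start at v1.28.0, stop, start again at v1.34.1 (which upgrades the control plane), then confirm that requesting the older version on the upgraded profile is refused with exit 106 rather than downgraded in place. Sketch:

  minikube start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=kvm2 --container-runtime=crio
  minikube stop -p kubernetes-upgrade-787090
  minikube start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.34.1 \
      --driver=kvm2 --container-runtime=crio
  # downgrade attempt: rejected with K8S_DOWNGRADE_UNSUPPORTED (exit 106)
  minikube start -p kubernetes-upgrade-787090 --memory=3072 --kubernetes-version=v1.28.0 \
      --driver=kvm2 --container-runtime=crio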

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-555034 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-555034 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (79.236906ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-555034] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
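--no-kubernetes and --kubernetes-version are mutually exclusive, so the start is rejected immediately with MK_USAGE (exit 14); the suggested fix applies when a version is pinned in the global config. Sketch:

  # rejected before any VM is created
  minikube start -p NoKubernetes-555034 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2
  # clear a globally configured version so --no-kubernetes can be used
  minikube config unset kubernetes-version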

                                                
                                    
TestPause/serial/Start (108.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-762562 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-762562 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m48.42713306s)
--- PASS: TestPause/serial/Start (108.43s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (86.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-555034 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-555034 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.289536454s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-555034 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (86.63s)

TestNoKubernetes/serial/StartWithStopK8s (7.87s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-555034 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-555034 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (6.766610129s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-555034 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-555034 status -o json: exit status 2 (255.341419ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-555034","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-555034
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.87s)

TestNoKubernetes/serial/Start (40.6s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-555034 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-555034 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.599946931s)
--- PASS: TestNoKubernetes/serial/Start (40.60s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-555034 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-555034 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.329448ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
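Note: the failing check is the desired outcome here. "systemctl is-active --quiet <unit>" exits 0 only when the unit is active, so a non-zero exit confirms the kubelet is not running inside the guest. An equivalent manual probe (illustrative only, not part of the test run):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-555034 "sudo systemctl is-active kubelet" || echo "kubelet is not active"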
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (3.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (2.35796122s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E1002 20:45:43.338673   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNoKubernetes/serial/ProfileList (3.04s)

TestNoKubernetes/serial/Stop (1.54s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-555034
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-555034: (1.543206765s)
--- PASS: TestNoKubernetes/serial/Stop (1.54s)

TestNetworkPlugins/group/false (3.38s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-446943 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-446943 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (127.684538ms)

                                                
                                                
-- stdout --
	* [false-446943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:45:45.017296   49121 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:45.017631   49121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:45.017645   49121 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:45.017652   49121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:45.018027   49121 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9524/.minikube/bin
	I1002 20:45:45.018843   49121 out.go:368] Setting JSON to false
	I1002 20:45:45.020198   49121 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5288,"bootTime":1759432657,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:45:45.020343   49121 start.go:140] virtualization: kvm guest
	I1002 20:45:45.023146   49121 out.go:179] * [false-446943] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:45:45.024765   49121 notify.go:221] Checking for updates...
	I1002 20:45:45.025417   49121 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:45:45.026999   49121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:45:45.029087   49121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9524/kubeconfig
	I1002 20:45:45.030294   49121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9524/.minikube
	I1002 20:45:45.031601   49121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:45:45.033186   49121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:45:45.034955   49121 config.go:182] Loaded profile config "NoKubernetes-555034": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1002 20:45:45.035192   49121 config.go:182] Loaded profile config "pause-762562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:45.035307   49121 config.go:182] Loaded profile config "running-upgrade-571399": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1002 20:45:45.035472   49121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:45:45.077810   49121 out.go:179] * Using the kvm2 driver based on user configuration
	I1002 20:45:45.078996   49121 start.go:306] selected driver: kvm2
	I1002 20:45:45.079014   49121 start.go:936] validating driver "kvm2" against <nil>
	I1002 20:45:45.079035   49121 start.go:947] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:45:45.080750   49121 out.go:203] 
	W1002 20:45:45.081898   49121 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 20:45:45.082974   49121 out.go:203] 

                                                
                                                
** /stderr **
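Note: exit status 14 is the expected usage error: with the crio runtime, --cni=false is rejected because crio needs a CNI plugin, so this "false" network-plugin case passes by failing fast. A minimal sketch of a start line crio would accept; the bridge CNI is an illustrative choice, not taken from this run:

	$ out/minikube-linux-amd64 start -p false-446943 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio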
net_test.go:88: 
----------------------- debugLogs start: false-446943 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-446943" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:44:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.218:8443
  name: pause-762562
contexts:
- context:
    cluster: pause-762562
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:44:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-762562
  name: pause-762562
current-context: ""
kind: Config
users:
- name: pause-762562
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.crt
    client-key: /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.key
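
Note: current-context is empty in the dump above and the only context defined is pause-762562, which is why every kubectl probe against the false-446943 context fails. Selecting the existing context explicitly would look like this (illustrative, not part of the test run):

	$ kubectl config use-context pause-762562
	$ kubectl --context pause-762562 get pods -n default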

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-446943

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-446943"

                                                
                                                
----------------------- debugLogs end: false-446943 [took: 3.087401267s] --------------------------------
helpers_test.go:175: Cleaning up "false-446943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-446943
--- PASS: TestNetworkPlugins/group/false (3.38s)

TestNoKubernetes/serial/StartNoArgs (27.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-555034 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-555034 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (27.370827886s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (27.37s)

TestStoppedBinaryUpgrade/Setup (3.16s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.16s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-555034 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-555034 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.087372ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestStoppedBinaryUpgrade/Upgrade (133.1s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.932675751 start -p stopped-upgrade-485667 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.932675751 start -p stopped-upgrade-485667 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.269956213s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.932675751 -p stopped-upgrade-485667 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.932675751 -p stopped-upgrade-485667 stop: (2.043393077s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-485667 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-485667 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m8.788674155s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (133.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (107.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-139425 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-139425 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m47.990281411s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (107.99s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-485667
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestStartStop/group/embed-certs/serial/FirstStart (94.05s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-687524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1002 20:48:31.373617   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-687524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m34.049501871s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.05s)

TestStartStop/group/no-preload/serial/FirstStart (100.56s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-462918 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-462918 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m40.563176243s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-139425 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [650016a7-b808-4e6f-97e9-962a34440ecd] Pending
helpers_test.go:352: "busybox" [650016a7-b808-4e6f-97e9-962a34440ecd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [650016a7-b808-4e6f-97e9-962a34440ecd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004410048s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-139425 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-139425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-139425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.134299734s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-139425 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/old-k8s-version/serial/Stop (84.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-139425 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-139425 --alsologtostderr -v=3: (1m24.288999512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (84.29s)

TestStartStop/group/embed-certs/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-687524 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [116d06ef-9176-487d-82b4-0bfbee9f2a41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [116d06ef-9176-487d-82b4-0bfbee9f2a41] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005036246s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-687524 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-687524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-687524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.112246703s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-687524 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/embed-certs/serial/Stop (88.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-687524 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-687524 --alsologtostderr -v=3: (1m28.459452188s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (88.46s)

TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-462918 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4ea9f94d-5032-4e08-b1c6-6ec1e519b07e] Pending
helpers_test.go:352: "busybox" [4ea9f94d-5032-4e08-b1c6-6ec1e519b07e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4ea9f94d-5032-4e08-b1c6-6ec1e519b07e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004817949s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-462918 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-462918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-462918 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023031479s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-462918 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (89.22s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-462918 --alsologtostderr -v=3
E1002 20:51:00.269574   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-462918 --alsologtostderr -v=3: (1m29.215649294s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (89.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-139425 -n old-k8s-version-139425
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-139425 -n old-k8s-version-139425: exit status 7 (77.639339ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-139425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (45.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-139425 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-139425 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (45.32100969s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-139425 -n old-k8s-version-139425
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.63s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-928792 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-928792 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m45.018860658s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687524 -n embed-certs-687524
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687524 -n embed-certs-687524: exit status 7 (89.251347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-687524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (66.83s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-687524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-687524 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m6.358155663s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-687524 -n embed-certs-687524
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (66.83s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6bqqj" [006144a4-029e-41d8-a253-85d779944064] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6bqqj" [006144a4-029e-41d8-a253-85d779944064] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.00591414s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-462918 -n no-preload-462918
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-462918 -n no-preload-462918: exit status 7 (85.765681ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-462918 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (66.67s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-462918 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-462918 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m6.310618043s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-462918 -n no-preload-462918
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (66.67s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-6bqqj" [006144a4-029e-41d8-a253-85d779944064] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004737025s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-139425 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-139425 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-139425 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-139425 --alsologtostderr -v=1: (1.075456919s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-139425 -n old-k8s-version-139425
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-139425 -n old-k8s-version-139425: exit status 2 (347.245505ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-139425 -n old-k8s-version-139425
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-139425 -n old-k8s-version-139425: exit status 2 (345.706097ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-139425 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-139425 --alsologtostderr -v=1: (1.221249975s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-139425 -n old-k8s-version-139425
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-139425 -n old-k8s-version-139425
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (54.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-383705 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-383705 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (54.878935743s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9dlh4" [1fa32642-01fd-4687-8541-305594af1a17] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9dlh4" [1fa32642-01fd-4687-8541-305594af1a17] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.007335058s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9dlh4" [1fa32642-01fd-4687-8541-305594af1a17] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004499783s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-687524 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-928792 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [57ce0b44-ac28-489a-9212-56be03e08055] Pending
helpers_test.go:352: "busybox" [57ce0b44-ac28-489a-9212-56be03e08055] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [57ce0b44-ac28-489a-9212-56be03e08055] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004524872s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-928792 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-687524 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-687524 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-687524 -n embed-certs-687524
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-687524 -n embed-certs-687524: exit status 2 (286.680409ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-687524 -n embed-certs-687524
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-687524 -n embed-certs-687524: exit status 2 (285.469469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-687524 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-687524 --alsologtostderr -v=1: (1.030645583s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-687524 -n embed-certs-687524
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-687524 -n embed-certs-687524
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-928792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-928792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.314827394s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-928792 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (89.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-928792 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-928792 --alsologtostderr -v=3: (1m29.580324171s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (89.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zccfz" [52a029c4-30d0-4360-b27e-74402e044cd9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zccfz" [52a029c4-30d0-4360-b27e-74402e044cd9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.005443322s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-383705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-383705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.284754425s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (73.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-383705 --alsologtostderr -v=3
E1002 20:53:31.374289   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/functional-527118/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-383705 --alsologtostderr -v=3: (1m13.053701664s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (73.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zccfz" [52a029c4-30d0-4360-b27e-74402e044cd9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00521898s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-462918 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-462918 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-462918 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-462918 -n no-preload-462918
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-462918 -n no-preload-462918: exit status 2 (252.296396ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-462918 -n no-preload-462918
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-462918 -n no-preload-462918: exit status 2 (245.048124ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-462918 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-462918 -n no-preload-462918
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-462918 -n no-preload-462918
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (87.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.007417495s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (103.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m43.777868712s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (103.78s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-383705 -n newest-cni-383705
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-383705 -n newest-cni-383705: exit status 7 (64.528343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-383705 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-383705 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1002 20:54:44.874976   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:44.881450   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:44.892983   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:44.914537   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:44.956060   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:45.037822   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:45.199955   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:45.521747   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:46.163630   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:47.445052   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:54:50.006988   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-383705 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (37.314161676s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-383705 -n newest-cni-383705
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792: exit status 7 (78.759995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-928792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-928792 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1002 20:54:55.129004   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:55:05.370994   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-928792 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (56.63229889s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.99s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-446943 "pgrep -a kubelet"
I1002 20:55:18.388947   13449 config.go:182] Loaded profile config "auto-446943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-446943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fkn7j" [9e529708-8e6d-4dd8-bed9-3f0c538d03d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fkn7j" [9e529708-8e6d-4dd8-bed9-3f0c538d03d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006223191s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-383705 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-383705 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-383705 --alsologtostderr -v=1: (1.47114281s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-383705 -n newest-cni-383705
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-383705 -n newest-cni-383705: exit status 2 (415.693095ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-383705 -n newest-cni-383705
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-383705 -n newest-cni-383705: exit status 2 (384.225454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-383705 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-383705 --alsologtostderr -v=1: (1.172824239s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-383705 -n newest-cni-383705
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-383705 -n newest-cni-383705
E1002 20:55:25.853061   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.53987221s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.54s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-446943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-vghgh" [2401dc06-ae32-4360-8712-b7087f262f56] Running
E1002 20:55:43.433279   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/no-preload-462918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004536541s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m17.960520079s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q5t96" [b1a89b49-2734-4198-bb5f-ffb08059622c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1002 20:55:48.555704   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/no-preload-462918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q5t96" [b1a89b49-2734-4198-bb5f-ffb08059622c] Running
E1002 20:56:06.815446   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/old-k8s-version-139425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004077084s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (19.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-446943 "pgrep -a kubelet"
I1002 20:55:48.833806   13449 config.go:182] Loaded profile config "kindnet-446943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-446943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c95bs" [283a42fc-b7ff-49ac-8a65-92f4ed42f9a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c95bs" [283a42fc-b7ff-49ac-8a65-92f4ed42f9a0] Running
E1002 20:55:58.797060   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/no-preload-462918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007219288s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-446943 exec deployment/netcat -- nslookup kubernetes.default
E1002 20:56:00.269540   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/addons-355008/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-q5t96" [b1a89b49-2734-4198-bb5f-ffb08059622c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003984232s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-928792 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-928792 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-928792 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-928792 --alsologtostderr -v=1: (1.234003633s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792: exit status 2 (364.751801ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792: exit status 2 (379.738546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-928792 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-928792 --alsologtostderr -v=1: (1.160409643s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-928792 -n default-k8s-diff-port-928792
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.04s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (81.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1002 20:56:19.279043   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/no-preload-462918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.37658177s)
--- PASS: TestNetworkPlugins/group/flannel/Start (81.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (100.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m40.599784509s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (100.60s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-446943 "pgrep -a kubelet"
I1002 20:56:53.596508   13449 config.go:182] Loaded profile config "enable-default-cni-446943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-446943 replace --force -f testdata/netcat-deployment.yaml
I1002 20:56:54.293352   13449 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1002 20:56:54.319798   13449 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w9sw7" [ef6f5547-9a17-431f-abba-0a03fa80e87d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 20:57:00.240374   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/no-preload-462918/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-w9sw7" [ef6f5547-9a17-431f-abba-0a03fa80e87d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004949282s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.76s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-wzgxn" [cae7380d-1afc-4a41-b487-3eb7aa8321c5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-wzgxn" [cae7380d-1afc-4a41-b487-3eb7aa8321c5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.045112076s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.05s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-446943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-446943 "pgrep -a kubelet"
I1002 20:57:11.285322   13449 config.go:182] Loaded profile config "calico-446943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-446943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-446943 replace --force -f testdata/netcat-deployment.yaml: (1.0665514s)
I1002 20:57:12.369952   13449 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1002 20:57:12.406317   13449 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9rjvl" [7af7f9d3-f405-41f4-a5bb-1dc9cf6c0fbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9rjvl" [7af7f9d3-f405-41f4-a5bb-1dc9cf6c0fbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004769554s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.17s)
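
NetCatPod force-replaces the netcat deployment and then polls until a pod matching app=netcat reports Ready (the "Waiting for deployment netcat to stabilize" lines above are that poll). A rough manual equivalent, assuming the same testdata manifest and the 15m timeout the test itself uses:

    kubectl --context calico-446943 replace --force -f testdata/netcat-deployment.yaml
    # block until the deployment has rolled out and its pods are Ready
    kubectl --context calico-446943 rollout status deployment/netcat --timeout=15m
    kubectl --context calico-446943 wait --for=condition=ready pod -l app=netcat --timeout=15m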

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (86.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-446943 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.651629247s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.65s)
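
The Start subtest is a full cluster bring-up with the CNI under test selected explicitly. To reproduce the bridge run outside the harness (a hedged sketch with an installed minikube; the --wait flags and --auto-update-drivers=false in the logged command only mirror CI behaviour):

    minikube start -p bridge-446943 \
      --memory=3072 \
      --cni=bridge \
      --driver=kvm2 \
      --container-runtime=crio \
      --wait=true --wait-timeout=15m \
      --alsologtostderr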

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-446943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-lqrsw" [d9eef1ab-9cc7-4bb3-8b7b-7424f744db48] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004795094s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
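
ControllerPod waits for the CNI's own daemon pod to become healthy before the connectivity subtests run; for flannel that is the kube-flannel DaemonSet. A hedged equivalent check:

    # wait up to 10 minutes for the flannel DaemonSet pod(s) to be Ready, then show where they landed
    kubectl --context flannel-446943 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m
    kubectl --context flannel-446943 -n kube-flannel get pods -l app=flannel -o wide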

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-446943 "pgrep -a kubelet"
I1002 20:57:45.430559   13449 config.go:182] Loaded profile config "flannel-446943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-446943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hffjw" [dc93c164-fe88-436e-ae0b-1766bb0ceaf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hffjw" [dc93c164-fe88-436e-ae0b-1766bb0ceaf7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003273272s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-446943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-446943 "pgrep -a kubelet"
I1002 20:58:01.214181   13449 config.go:182] Loaded profile config "custom-flannel-446943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-446943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bl2rc" [99457578-68cd-4b2c-b1c2-d534bf71abd9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bl2rc" [99457578-68cd-4b2c-b1c2-d534bf71abd9] Running
E1002 20:58:08.893811   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:08.900183   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:08.911624   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:08.933000   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:08.974557   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:09.056147   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:09.218342   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:09.541126   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:10.184159   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:58:11.466088   13449 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/default-k8s-diff-port-928792/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003662631s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)
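
The repeated cert_rotation errors interleaved above are not produced by this subtest; they appear to be background noise from the shared test client still referencing the already-deleted default-k8s-diff-port-928792 profile, whose client.crt was removed along with the profile. A hedged way to confirm which profiles and contexts are still live while a run like this is in progress:

    # list minikube profiles the CI build still knows about
    out/minikube-linux-amd64 profile list
    # list the kubeconfig contexts the shared client can actually use
    kubectl config get-contexts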

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-446943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-446943 "pgrep -a kubelet"
I1002 20:58:50.496352   13449 config.go:182] Loaded profile config "bridge-446943": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-446943 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l9ll6" [a3bd06c7-8706-42ed-8d59-210a71f211e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l9ll6" [a3bd06c7-8706-42ed-8d59-210a71f211e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003406066s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-446943 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-446943 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
261 TestStartStop/group/disable-driver-mounts 0.16
273 TestNetworkPlugins/group/kubenet 3.12
283 TestNetworkPlugins/group/cilium 3.85
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-355008 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
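
All of the TunnelCmd subtests below skip for the same reason: minikube tunnel needs to modify host routes, and on this agent running 'route' would require an interactive sudo password. When reproducing locally, running the tunnel in a shell where sudo can authenticate avoids the skip; a hedged sketch (the profile name is a placeholder):

    # pre-authenticate sudo so the tunnel can create routes without prompting mid-test
    sudo -v
    # keeps running in the foreground and routes LoadBalancer service IPs into the cluster
    minikube tunnel -p <profile>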

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-850164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-850164
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-446943 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-446943" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:44:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.218:8443
  name: pause-762562
contexts:
- context:
    cluster: pause-762562
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:44:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-762562
  name: pause-762562
current-context: ""
kind: Config
users:
- name: pause-762562
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.crt
    client-key: /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-446943

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-446943"

                                                
                                                
----------------------- debugLogs end: kubenet-446943 [took: 2.94122916s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-446943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-446943
--- SKIP: TestNetworkPlugins/group/kubenet (3.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-446943 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-446943" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-9524/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:44:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.218:8443
  name: pause-762562
contexts:
- context:
    cluster: pause-762562
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 20:44:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-762562
  name: pause-762562
current-context: ""
kind: Config
users:
- name: pause-762562
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.crt
    client-key: /home/jenkins/minikube-integration/21683-9524/.minikube/profiles/pause-762562/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-446943

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-446943" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446943"

                                                
                                                
----------------------- debugLogs end: cilium-446943 [took: 3.666211405s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-446943" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-446943
--- SKIP: TestNetworkPlugins/group/cilium (3.85s)

                                                
                                    