Test Report: KVM_Linux_containerd 20288

ced131f14e611cbeeb9356239cf0040c87f16008:2025-01-22:38026

Tests failed (3/316)

Order   Failed test                                                    Duration (s)
244     TestPreload                                                    46.15
307     TestStartStop/group/default-k8s-diff-port/serial/FirstStart    1801.65
312     TestStartStop/group/no-preload/serial/SecondStart              1598.13
TestPreload (46.15s)
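The failing step is the initial cluster start at preload_test.go:44, which exits with status 100 after roughly 45 seconds. For orientation, the sketch below mirrors what that step does: shell out to the freshly built minikube binary with the flags shown in the log and treat a non-zero exit as a failure. This is a minimal illustration, not the actual test source; the binary path and flags are copied from the log below, and the harness wiring around them is simplified.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Flags copied from the failing invocation in the log below;
        // --preload=false forces the non-preload code path under test.
        cmd := exec.Command("out/minikube-linux-amd64",
            "start", "-p", "test-preload-159708",
            "--memory=2200", "--alsologtostderr", "--wait=true",
            "--preload=false",
            "--driver=kvm2", "--container-runtime=containerd",
            "--kubernetes-version=v1.24.4",
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            // In this report the command returned exit status 100.
            fmt.Fprintln(os.Stderr, "minikube start failed:", err)
            os.Exit(1)
        }
    }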

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-159708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0122 20:51:35.695935  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-159708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: exit status 100 (44.657496894s)

-- stdout --
	* [test-preload-159708] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "test-preload-159708" primary control-plane node in "test-preload-159708" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.24.4 on containerd 1.7.23 ...
	
	

-- /stdout --
** stderr ** 
	I0122 20:51:22.957630  188344 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:51:22.957718  188344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:51:22.957722  188344 out.go:358] Setting ErrFile to fd 2...
	I0122 20:51:22.957727  188344 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:51:22.957887  188344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:51:22.958500  188344 out.go:352] Setting JSON to false
	I0122 20:51:22.959457  188344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9218,"bootTime":1737569865,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:51:22.959547  188344 start.go:139] virtualization: kvm guest
	I0122 20:51:22.961562  188344 out.go:177] * [test-preload-159708] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 20:51:22.962903  188344 notify.go:220] Checking for updates...
	I0122 20:51:22.962906  188344 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 20:51:22.964209  188344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:51:22.965415  188344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 20:51:22.966524  188344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 20:51:22.967729  188344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 20:51:22.969060  188344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 20:51:22.970344  188344 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:51:23.006793  188344 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 20:51:23.007933  188344 start.go:297] selected driver: kvm2
	I0122 20:51:23.007948  188344 start.go:901] validating driver "kvm2" against <nil>
	I0122 20:51:23.007958  188344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 20:51:23.008777  188344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.008881  188344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-150966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 20:51:23.024387  188344 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 20:51:23.024433  188344 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 20:51:23.024647  188344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 20:51:23.024678  188344 cni.go:84] Creating CNI manager for ""
	I0122 20:51:23.024726  188344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 20:51:23.024735  188344 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 20:51:23.024774  188344 start.go:340] cluster config:
	{Name:test-preload-159708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:51:23.024870  188344 iso.go:125] acquiring lock: {Name:mkc3bf0604e328871936621dd0e0cda10261a449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.026511  188344 out.go:177] * Starting "test-preload-159708" primary control-plane node in "test-preload-159708" cluster
	I0122 20:51:23.027775  188344 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0122 20:51:23.028035  188344 cache.go:107] acquiring lock: {Name:mk1665b4cc1b6a34fd0403159eee7d0dca4e7cc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028056  188344 cache.go:107] acquiring lock: {Name:mk78d2f1713da5613903a345eba3a750bc36e23f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028058  188344 cache.go:107] acquiring lock: {Name:mka2650751f71d993171f4ad9b37c37cdeb31da1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028100  188344 cache.go:107] acquiring lock: {Name:mk591cce79ae6372b83fcbac4a4da005e7893570 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028113  188344 cache.go:107] acquiring lock: {Name:mke2720ea49830336796318b83f6eab272efd2a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028142  188344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/test-preload-159708/config.json ...
	I0122 20:51:23.028160  188344 cache.go:107] acquiring lock: {Name:mke8e367394d76cadc438e2195bc05eedc065b7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028174  188344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/test-preload-159708/config.json: {Name:mkc9c9c2115a05052d4d5bf2a98940e4e69b31f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 20:51:23.028175  188344 cache.go:107] acquiring lock: {Name:mk19a90a4fa7cd8fc858edf2379fc10b8e45b330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028249  188344 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:23.028250  188344 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:23.028242  188344 cache.go:107] acquiring lock: {Name:mk57c8da366a354b038860542b064f19cbd7f7f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 20:51:23.028282  188344 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:23.028313  188344 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:23.028319  188344 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:23.028360  188344 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0122 20:51:23.028361  188344 start.go:360] acquireMachinesLock for test-preload-159708: {Name:mkde076c0ff5ffaed1ac7d9ac4f697ecfb6e2cf2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 20:51:23.028396  188344 start.go:364] duration metric: took 19.039µs to acquireMachinesLock for "test-preload-159708"
	I0122 20:51:23.028421  188344 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:23.028415  188344 start.go:93] Provisioning new machine with config: &{Name:test-preload-159708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 20:51:23.028558  188344 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 20:51:23.029012  188344 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:23.029867  188344 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:23.029894  188344 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:23.029873  188344 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:23.029949  188344 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:23.029981  188344 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:23.030061  188344 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:23.029875  188344 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0122 20:51:23.029879  188344 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:23.030396  188344 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0122 20:51:23.030537  188344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:51:23.030587  188344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:51:23.046316  188344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0122 20:51:23.046815  188344 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:51:23.047414  188344 main.go:141] libmachine: Using API Version  1
	I0122 20:51:23.047441  188344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:51:23.047763  188344 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:51:23.047948  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetMachineName
	I0122 20:51:23.048076  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:23.048231  188344 start.go:159] libmachine.API.Create for "test-preload-159708" (driver="kvm2")
	I0122 20:51:23.048261  188344 client.go:168] LocalClient.Create starting
	I0122 20:51:23.048295  188344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem
	I0122 20:51:23.048330  188344 main.go:141] libmachine: Decoding PEM data...
	I0122 20:51:23.048347  188344 main.go:141] libmachine: Parsing certificate...
	I0122 20:51:23.048408  188344 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem
	I0122 20:51:23.048437  188344 main.go:141] libmachine: Decoding PEM data...
	I0122 20:51:23.048455  188344 main.go:141] libmachine: Parsing certificate...
	I0122 20:51:23.048479  188344 main.go:141] libmachine: Running pre-create checks...
	I0122 20:51:23.048491  188344 main.go:141] libmachine: (test-preload-159708) Calling .PreCreateCheck
	I0122 20:51:23.048829  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetConfigRaw
	I0122 20:51:23.049169  188344 main.go:141] libmachine: Creating machine...
	I0122 20:51:23.049182  188344 main.go:141] libmachine: (test-preload-159708) Calling .Create
	I0122 20:51:23.049308  188344 main.go:141] libmachine: (test-preload-159708) creating KVM machine...
	I0122 20:51:23.049336  188344 main.go:141] libmachine: (test-preload-159708) creating network...
	I0122 20:51:23.050564  188344 main.go:141] libmachine: (test-preload-159708) DBG | found existing default KVM network
	I0122 20:51:23.051260  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:23.051112  188368 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091c0}
	I0122 20:51:23.051290  188344 main.go:141] libmachine: (test-preload-159708) DBG | created network xml: 
	I0122 20:51:23.051304  188344 main.go:141] libmachine: (test-preload-159708) DBG | <network>
	I0122 20:51:23.051321  188344 main.go:141] libmachine: (test-preload-159708) DBG |   <name>mk-test-preload-159708</name>
	I0122 20:51:23.051337  188344 main.go:141] libmachine: (test-preload-159708) DBG |   <dns enable='no'/>
	I0122 20:51:23.051346  188344 main.go:141] libmachine: (test-preload-159708) DBG |   
	I0122 20:51:23.051358  188344 main.go:141] libmachine: (test-preload-159708) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0122 20:51:23.051377  188344 main.go:141] libmachine: (test-preload-159708) DBG |     <dhcp>
	I0122 20:51:23.051392  188344 main.go:141] libmachine: (test-preload-159708) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0122 20:51:23.051405  188344 main.go:141] libmachine: (test-preload-159708) DBG |     </dhcp>
	I0122 20:51:23.051423  188344 main.go:141] libmachine: (test-preload-159708) DBG |   </ip>
	I0122 20:51:23.051435  188344 main.go:141] libmachine: (test-preload-159708) DBG |   
	I0122 20:51:23.051448  188344 main.go:141] libmachine: (test-preload-159708) DBG | </network>
	I0122 20:51:23.051461  188344 main.go:141] libmachine: (test-preload-159708) DBG | 
	I0122 20:51:23.056499  188344 main.go:141] libmachine: (test-preload-159708) DBG | trying to create private KVM network mk-test-preload-159708 192.168.39.0/24...
	I0122 20:51:23.128474  188344 main.go:141] libmachine: (test-preload-159708) DBG | private KVM network mk-test-preload-159708 192.168.39.0/24 created
	I0122 20:51:23.128512  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:23.128459  188368 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 20:51:23.128527  188344 main.go:141] libmachine: (test-preload-159708) setting up store path in /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708 ...
	I0122 20:51:23.128544  188344 main.go:141] libmachine: (test-preload-159708) building disk image from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 20:51:23.128727  188344 main.go:141] libmachine: (test-preload-159708) Downloading /home/jenkins/minikube-integration/20288-150966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 20:51:23.242249  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0122 20:51:23.249137  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0122 20:51:23.254769  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0122 20:51:23.267654  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0122 20:51:23.274217  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0122 20:51:23.282394  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0122 20:51:23.289115  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0122 20:51:23.336573  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 exists
	I0122 20:51:23.336606  188344 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7" took 308.519497ms
	I0122 20:51:23.336629  188344 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 succeeded
	I0122 20:51:23.394601  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:23.394481  188368 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/id_rsa...
	I0122 20:51:23.523601  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:23.523528  188368 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/test-preload-159708.rawdisk...
	I0122 20:51:23.523638  188344 main.go:141] libmachine: (test-preload-159708) DBG | Writing magic tar header
	I0122 20:51:23.523658  188344 main.go:141] libmachine: (test-preload-159708) DBG | Writing SSH key tar header
	I0122 20:51:23.523722  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:23.523678  188368 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708 ...
	I0122 20:51:23.523853  188344 main.go:141] libmachine: (test-preload-159708) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708
	I0122 20:51:23.523877  188344 main.go:141] libmachine: (test-preload-159708) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708 (perms=drwx------)
	I0122 20:51:23.523889  188344 main.go:141] libmachine: (test-preload-159708) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines
	I0122 20:51:23.523903  188344 main.go:141] libmachine: (test-preload-159708) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 20:51:23.523917  188344 main.go:141] libmachine: (test-preload-159708) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines (perms=drwxr-xr-x)
	I0122 20:51:23.523923  188344 main.go:141] libmachine: (test-preload-159708) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966
	I0122 20:51:23.523936  188344 main.go:141] libmachine: (test-preload-159708) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 20:51:23.523945  188344 main.go:141] libmachine: (test-preload-159708) DBG | checking permissions on dir: /home/jenkins
	I0122 20:51:23.523970  188344 main.go:141] libmachine: (test-preload-159708) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube (perms=drwxr-xr-x)
	I0122 20:51:23.523984  188344 main.go:141] libmachine: (test-preload-159708) DBG | checking permissions on dir: /home
	I0122 20:51:23.523996  188344 main.go:141] libmachine: (test-preload-159708) setting executable bit set on /home/jenkins/minikube-integration/20288-150966 (perms=drwxrwxr-x)
	I0122 20:51:23.524005  188344 main.go:141] libmachine: (test-preload-159708) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 20:51:23.524011  188344 main.go:141] libmachine: (test-preload-159708) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0122 20:51:23.524018  188344 main.go:141] libmachine: (test-preload-159708) creating domain...
	I0122 20:51:23.524025  188344 main.go:141] libmachine: (test-preload-159708) DBG | skipping /home - not owner
	I0122 20:51:23.525261  188344 main.go:141] libmachine: (test-preload-159708) define libvirt domain using xml: 
	I0122 20:51:23.525275  188344 main.go:141] libmachine: (test-preload-159708) <domain type='kvm'>
	I0122 20:51:23.525290  188344 main.go:141] libmachine: (test-preload-159708)   <name>test-preload-159708</name>
	I0122 20:51:23.525297  188344 main.go:141] libmachine: (test-preload-159708)   <memory unit='MiB'>2200</memory>
	I0122 20:51:23.525306  188344 main.go:141] libmachine: (test-preload-159708)   <vcpu>2</vcpu>
	I0122 20:51:23.525319  188344 main.go:141] libmachine: (test-preload-159708)   <features>
	I0122 20:51:23.525327  188344 main.go:141] libmachine: (test-preload-159708)     <acpi/>
	I0122 20:51:23.525336  188344 main.go:141] libmachine: (test-preload-159708)     <apic/>
	I0122 20:51:23.525345  188344 main.go:141] libmachine: (test-preload-159708)     <pae/>
	I0122 20:51:23.525352  188344 main.go:141] libmachine: (test-preload-159708)     
	I0122 20:51:23.525379  188344 main.go:141] libmachine: (test-preload-159708)   </features>
	I0122 20:51:23.525403  188344 main.go:141] libmachine: (test-preload-159708)   <cpu mode='host-passthrough'>
	I0122 20:51:23.525430  188344 main.go:141] libmachine: (test-preload-159708)   
	I0122 20:51:23.525451  188344 main.go:141] libmachine: (test-preload-159708)   </cpu>
	I0122 20:51:23.525461  188344 main.go:141] libmachine: (test-preload-159708)   <os>
	I0122 20:51:23.525468  188344 main.go:141] libmachine: (test-preload-159708)     <type>hvm</type>
	I0122 20:51:23.525477  188344 main.go:141] libmachine: (test-preload-159708)     <boot dev='cdrom'/>
	I0122 20:51:23.525485  188344 main.go:141] libmachine: (test-preload-159708)     <boot dev='hd'/>
	I0122 20:51:23.525500  188344 main.go:141] libmachine: (test-preload-159708)     <bootmenu enable='no'/>
	I0122 20:51:23.525507  188344 main.go:141] libmachine: (test-preload-159708)   </os>
	I0122 20:51:23.525526  188344 main.go:141] libmachine: (test-preload-159708)   <devices>
	I0122 20:51:23.525537  188344 main.go:141] libmachine: (test-preload-159708)     <disk type='file' device='cdrom'>
	I0122 20:51:23.525547  188344 main.go:141] libmachine: (test-preload-159708)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/boot2docker.iso'/>
	I0122 20:51:23.525557  188344 main.go:141] libmachine: (test-preload-159708)       <target dev='hdc' bus='scsi'/>
	I0122 20:51:23.525563  188344 main.go:141] libmachine: (test-preload-159708)       <readonly/>
	I0122 20:51:23.525572  188344 main.go:141] libmachine: (test-preload-159708)     </disk>
	I0122 20:51:23.525579  188344 main.go:141] libmachine: (test-preload-159708)     <disk type='file' device='disk'>
	I0122 20:51:23.525591  188344 main.go:141] libmachine: (test-preload-159708)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 20:51:23.525603  188344 main.go:141] libmachine: (test-preload-159708)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/test-preload-159708.rawdisk'/>
	I0122 20:51:23.525611  188344 main.go:141] libmachine: (test-preload-159708)       <target dev='hda' bus='virtio'/>
	I0122 20:51:23.525617  188344 main.go:141] libmachine: (test-preload-159708)     </disk>
	I0122 20:51:23.525626  188344 main.go:141] libmachine: (test-preload-159708)     <interface type='network'>
	I0122 20:51:23.525634  188344 main.go:141] libmachine: (test-preload-159708)       <source network='mk-test-preload-159708'/>
	I0122 20:51:23.525644  188344 main.go:141] libmachine: (test-preload-159708)       <model type='virtio'/>
	I0122 20:51:23.525653  188344 main.go:141] libmachine: (test-preload-159708)     </interface>
	I0122 20:51:23.525662  188344 main.go:141] libmachine: (test-preload-159708)     <interface type='network'>
	I0122 20:51:23.525677  188344 main.go:141] libmachine: (test-preload-159708)       <source network='default'/>
	I0122 20:51:23.525714  188344 main.go:141] libmachine: (test-preload-159708)       <model type='virtio'/>
	I0122 20:51:23.525733  188344 main.go:141] libmachine: (test-preload-159708)     </interface>
	I0122 20:51:23.525745  188344 main.go:141] libmachine: (test-preload-159708)     <serial type='pty'>
	I0122 20:51:23.525767  188344 main.go:141] libmachine: (test-preload-159708)       <target port='0'/>
	I0122 20:51:23.525774  188344 main.go:141] libmachine: (test-preload-159708)     </serial>
	I0122 20:51:23.525787  188344 main.go:141] libmachine: (test-preload-159708)     <console type='pty'>
	I0122 20:51:23.525795  188344 main.go:141] libmachine: (test-preload-159708)       <target type='serial' port='0'/>
	I0122 20:51:23.525808  188344 main.go:141] libmachine: (test-preload-159708)     </console>
	I0122 20:51:23.525840  188344 main.go:141] libmachine: (test-preload-159708)     <rng model='virtio'>
	I0122 20:51:23.525856  188344 main.go:141] libmachine: (test-preload-159708)       <backend model='random'>/dev/random</backend>
	I0122 20:51:23.525866  188344 main.go:141] libmachine: (test-preload-159708)     </rng>
	I0122 20:51:23.525873  188344 main.go:141] libmachine: (test-preload-159708)     
	I0122 20:51:23.525881  188344 main.go:141] libmachine: (test-preload-159708)     
	I0122 20:51:23.525888  188344 main.go:141] libmachine: (test-preload-159708)   </devices>
	I0122 20:51:23.525895  188344 main.go:141] libmachine: (test-preload-159708) </domain>
	I0122 20:51:23.525901  188344 main.go:141] libmachine: (test-preload-159708) 
	I0122 20:51:23.530435  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:f2:66:cc in network default
	I0122 20:51:23.531167  188344 main.go:141] libmachine: (test-preload-159708) starting domain...
	I0122 20:51:23.531215  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:23.531228  188344 main.go:141] libmachine: (test-preload-159708) ensuring networks are active...
	I0122 20:51:23.531980  188344 main.go:141] libmachine: (test-preload-159708) Ensuring network default is active
	I0122 20:51:23.532392  188344 main.go:141] libmachine: (test-preload-159708) Ensuring network mk-test-preload-159708 is active
	I0122 20:51:23.532977  188344 main.go:141] libmachine: (test-preload-159708) getting domain XML...
	I0122 20:51:23.533879  188344 main.go:141] libmachine: (test-preload-159708) creating domain...
	I0122 20:51:23.740140  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0122 20:51:23.740169  188344 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6" took 711.930527ms
	I0122 20:51:23.740184  188344 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0122 20:51:23.829273  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0122 20:51:23.829304  188344 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4" took 801.208346ms
	I0122 20:51:23.829363  188344 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0122 20:51:23.869398  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0122 20:51:23.869435  188344 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4" took 841.396112ms
	I0122 20:51:23.869452  188344 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0122 20:51:23.957497  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0122 20:51:23.957635  188344 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4" took 929.620327ms
	I0122 20:51:23.957673  188344 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0122 20:51:24.062492  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0122 20:51:24.062519  188344 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4" took 1.034356873s
	I0122 20:51:24.062531  188344 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0122 20:51:24.142101  188344 cache.go:162] opening:  /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0122 20:51:24.451680  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0122 20:51:24.451711  188344 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.423670032s
	I0122 20:51:24.451723  188344 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0122 20:51:24.884743  188344 cache.go:157] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 exists
	I0122 20:51:24.884773  188344 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0" took 1.8566131s
	I0122 20:51:24.884791  188344 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0122 20:51:24.884810  188344 cache.go:87] Successfully saved all images to host disk.
	I0122 20:51:24.910925  188344 main.go:141] libmachine: (test-preload-159708) waiting for IP...
	I0122 20:51:24.911750  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:24.912155  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:24.912219  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:24.912143  188368 retry.go:31] will retry after 261.264182ms: waiting for domain to come up
	I0122 20:51:25.174892  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:25.175273  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:25.175303  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:25.175224  188368 retry.go:31] will retry after 257.566437ms: waiting for domain to come up
	I0122 20:51:25.435828  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:25.436352  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:25.436376  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:25.436325  188368 retry.go:31] will retry after 464.578746ms: waiting for domain to come up
	I0122 20:51:25.903025  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:25.903598  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:25.903630  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:25.903574  188368 retry.go:31] will retry after 468.119665ms: waiting for domain to come up
	I0122 20:51:26.373142  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:26.373583  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:26.373616  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:26.373536  188368 retry.go:31] will retry after 611.134699ms: waiting for domain to come up
	I0122 20:51:26.986318  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:26.986689  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:26.986729  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:26.986679  188368 retry.go:31] will retry after 833.661702ms: waiting for domain to come up
	I0122 20:51:27.821657  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:27.822177  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:27.822199  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:27.822148  188368 retry.go:31] will retry after 1.102739291s: waiting for domain to come up
	I0122 20:51:28.926258  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:28.926703  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:28.926733  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:28.926665  188368 retry.go:31] will retry after 912.677954ms: waiting for domain to come up
	I0122 20:51:29.840780  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:29.841162  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:29.841186  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:29.841134  188368 retry.go:31] will retry after 1.826688053s: waiting for domain to come up
	I0122 20:51:31.670067  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:31.670544  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:31.670584  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:31.670471  188368 retry.go:31] will retry after 1.722527014s: waiting for domain to come up
	I0122 20:51:33.395072  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:33.395546  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:33.395571  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:33.395512  188368 retry.go:31] will retry after 2.067913616s: waiting for domain to come up
	I0122 20:51:35.465842  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:35.466321  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:35.466349  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:35.466278  188368 retry.go:31] will retry after 3.226442333s: waiting for domain to come up
	I0122 20:51:38.694247  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:38.694680  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:38.694718  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:38.694645  188368 retry.go:31] will retry after 2.753521074s: waiting for domain to come up
	I0122 20:51:41.451684  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:41.452047  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find current IP address of domain test-preload-159708 in network mk-test-preload-159708
	I0122 20:51:41.452066  188344 main.go:141] libmachine: (test-preload-159708) DBG | I0122 20:51:41.452025  188368 retry.go:31] will retry after 4.19181868s: waiting for domain to come up
	I0122 20:51:45.647109  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.647544  188344 main.go:141] libmachine: (test-preload-159708) found domain IP: 192.168.39.64
	I0122 20:51:45.647565  188344 main.go:141] libmachine: (test-preload-159708) reserving static IP address...
	I0122 20:51:45.647579  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has current primary IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.647981  188344 main.go:141] libmachine: (test-preload-159708) DBG | unable to find host DHCP lease matching {name: "test-preload-159708", mac: "52:54:00:8f:59:8e", ip: "192.168.39.64"} in network mk-test-preload-159708
	I0122 20:51:45.719921  188344 main.go:141] libmachine: (test-preload-159708) reserved static IP address 192.168.39.64 for domain test-preload-159708
	I0122 20:51:45.719951  188344 main.go:141] libmachine: (test-preload-159708) waiting for SSH...
	I0122 20:51:45.719964  188344 main.go:141] libmachine: (test-preload-159708) DBG | Getting to WaitForSSH function...
	I0122 20:51:45.722732  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.723110  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:45.723145  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.723282  188344 main.go:141] libmachine: (test-preload-159708) DBG | Using SSH client type: external
	I0122 20:51:45.723308  188344 main.go:141] libmachine: (test-preload-159708) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/id_rsa (-rw-------)
	I0122 20:51:45.723347  188344 main.go:141] libmachine: (test-preload-159708) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 20:51:45.723363  188344 main.go:141] libmachine: (test-preload-159708) DBG | About to run SSH command:
	I0122 20:51:45.723379  188344 main.go:141] libmachine: (test-preload-159708) DBG | exit 0
	I0122 20:51:45.849755  188344 main.go:141] libmachine: (test-preload-159708) DBG | SSH cmd err, output: <nil>: 
	I0122 20:51:45.850092  188344 main.go:141] libmachine: (test-preload-159708) KVM machine creation complete
	I0122 20:51:45.850411  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetConfigRaw
	I0122 20:51:45.850947  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:45.851182  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:45.851383  188344 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 20:51:45.851398  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetState
	I0122 20:51:45.852759  188344 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 20:51:45.852775  188344 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 20:51:45.852782  188344 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 20:51:45.852791  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:45.855017  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.855291  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:45.855329  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.855412  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:45.855584  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:45.855731  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:45.855892  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:45.856073  188344 main.go:141] libmachine: Using SSH client type: native
	I0122 20:51:45.856292  188344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0122 20:51:45.856303  188344 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 20:51:45.965115  188344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 20:51:45.965138  188344 main.go:141] libmachine: Detecting the provisioner...
	I0122 20:51:45.965145  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:45.967946  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.968310  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:45.968342  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:45.968554  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:45.968757  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:45.968924  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:45.969036  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:45.969167  188344 main.go:141] libmachine: Using SSH client type: native
	I0122 20:51:45.969352  188344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0122 20:51:45.969366  188344 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 20:51:46.082583  188344 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0122 20:51:46.082632  188344 main.go:141] libmachine: found compatible host: buildroot
	I0122 20:51:46.082637  188344 main.go:141] libmachine: Provisioning with buildroot...
	I0122 20:51:46.082645  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetMachineName
	I0122 20:51:46.082881  188344 buildroot.go:166] provisioning hostname "test-preload-159708"
	I0122 20:51:46.082907  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetMachineName
	I0122 20:51:46.083102  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.085757  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.086100  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.086128  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.086284  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:46.086476  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.086639  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.086774  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:46.086910  188344 main.go:141] libmachine: Using SSH client type: native
	I0122 20:51:46.087105  188344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0122 20:51:46.087118  188344 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-159708 && echo "test-preload-159708" | sudo tee /etc/hostname
	I0122 20:51:46.212981  188344 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-159708
	
	I0122 20:51:46.213009  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.215770  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.216109  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.216136  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.216379  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:46.216585  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.216734  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.216897  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:46.217048  188344 main.go:141] libmachine: Using SSH client type: native
	I0122 20:51:46.217207  188344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0122 20:51:46.217222  188344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-159708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-159708/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-159708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 20:51:46.338605  188344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 20:51:46.338644  188344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-150966/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-150966/.minikube}
	I0122 20:51:46.338664  188344 buildroot.go:174] setting up certificates
	I0122 20:51:46.338676  188344 provision.go:84] configureAuth start
	I0122 20:51:46.338686  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetMachineName
	I0122 20:51:46.338945  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetIP
	I0122 20:51:46.341434  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.341790  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.341823  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.341980  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.344085  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.344363  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.344387  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.344646  188344 provision.go:143] copyHostCerts
	I0122 20:51:46.344704  188344 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem, removing ...
	I0122 20:51:46.344716  188344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem
	I0122 20:51:46.344778  188344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem (1078 bytes)
	I0122 20:51:46.344869  188344 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem, removing ...
	I0122 20:51:46.344878  188344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem
	I0122 20:51:46.344902  188344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem (1123 bytes)
	I0122 20:51:46.344953  188344 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem, removing ...
	I0122 20:51:46.344960  188344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem
	I0122 20:51:46.344981  188344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem (1675 bytes)
	I0122 20:51:46.345028  188344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem org=jenkins.test-preload-159708 san=[127.0.0.1 192.168.39.64 localhost minikube test-preload-159708]
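The line above generates a server certificate whose SANs cover the loopback address, the VM IP, and the machine names. A minimal sketch of issuing such a certificate with Go's crypto/x509, assuming a self-signed cert for brevity (minikube actually signs with the CA key from certs/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-159708"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		DNSNames:    []string{"localhost", "minikube", "test-preload-159708"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.64")},
	}
	// Self-signed here for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}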
	I0122 20:51:46.497487  188344 provision.go:177] copyRemoteCerts
	I0122 20:51:46.497548  188344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 20:51:46.497576  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.500448  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.500822  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.500857  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.501091  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:46.501312  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.501485  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:46.501620  188344 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/id_rsa Username:docker}
	I0122 20:51:46.588579  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 20:51:46.610834  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 20:51:46.632480  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0122 20:51:46.653519  188344 provision.go:87] duration metric: took 314.829856ms to configureAuth
	I0122 20:51:46.653555  188344 buildroot.go:189] setting minikube options for container-runtime
	I0122 20:51:46.653753  188344 config.go:182] Loaded profile config "test-preload-159708": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0122 20:51:46.653776  188344 main.go:141] libmachine: Checking connection to Docker...
	I0122 20:51:46.653798  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetURL
	I0122 20:51:46.654939  188344 main.go:141] libmachine: (test-preload-159708) DBG | using libvirt version 6000000
	I0122 20:51:46.656760  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.657064  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.657095  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.657262  188344 main.go:141] libmachine: Docker is up and running!
	I0122 20:51:46.657287  188344 main.go:141] libmachine: Reticulating splines...
	I0122 20:51:46.657297  188344 client.go:171] duration metric: took 23.609025186s to LocalClient.Create
	I0122 20:51:46.657327  188344 start.go:167] duration metric: took 23.609096748s to libmachine.API.Create "test-preload-159708"
	I0122 20:51:46.657340  188344 start.go:293] postStartSetup for "test-preload-159708" (driver="kvm2")
	I0122 20:51:46.657355  188344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 20:51:46.657377  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:46.657602  188344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 20:51:46.657627  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.659524  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.659848  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.659880  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.659975  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:46.660164  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.660335  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:46.660459  188344 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/id_rsa Username:docker}
	I0122 20:51:46.744077  188344 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 20:51:46.748178  188344 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 20:51:46.748201  188344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/addons for local assets ...
	I0122 20:51:46.748276  188344 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/files for local assets ...
	I0122 20:51:46.748352  188344 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem -> 1582712.pem in /etc/ssl/certs
	I0122 20:51:46.748433  188344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 20:51:46.757594  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 20:51:46.779771  188344 start.go:296] duration metric: took 122.414025ms for postStartSetup
	I0122 20:51:46.779824  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetConfigRaw
	I0122 20:51:46.780546  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetIP
	I0122 20:51:46.783445  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.783770  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.783815  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.784067  188344 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/test-preload-159708/config.json ...
	I0122 20:51:46.784237  188344 start.go:128] duration metric: took 23.755652597s to createHost
	I0122 20:51:46.784263  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.786618  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.786919  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.786946  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.787091  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:46.787262  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.787450  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.787566  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:46.787717  188344 main.go:141] libmachine: Using SSH client type: native
	I0122 20:51:46.787914  188344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
	I0122 20:51:46.787928  188344 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 20:51:46.898614  188344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737579106.866980369
	
	I0122 20:51:46.898642  188344 fix.go:216] guest clock: 1737579106.866980369
	I0122 20:51:46.898650  188344 fix.go:229] Guest: 2025-01-22 20:51:46.866980369 +0000 UTC Remote: 2025-01-22 20:51:46.784249933 +0000 UTC m=+23.862560389 (delta=82.730436ms)
	I0122 20:51:46.898671  188344 fix.go:200] guest clock delta is within tolerance: 82.730436ms
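The fix.go lines above run `date +%s.%N` on the guest and compare the result to the host clock (delta here: 82.73ms). A sketch of that comparison, assuming a one-second tolerance for illustration (the actual tolerance is not shown in this log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses `date +%s.%N` output and returns host minus guest.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64((secs-math.Trunc(secs))*1e9))
	return host.Sub(guest), nil
}

func main() {
	// Values taken from the log lines above.
	delta, err := clockDelta("1737579106.866980369", time.Unix(1737579106, 784249933))
	if err != nil {
		panic(err)
	}
	if delta < time.Second && delta > -time.Second { // assumed tolerance
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}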
	I0122 20:51:46.898676  188344 start.go:83] releasing machines lock for "test-preload-159708", held for 23.870272518s
	I0122 20:51:46.898698  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:46.898993  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetIP
	I0122 20:51:46.901931  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.902321  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.902350  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.902494  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:46.903070  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:46.903255  188344 main.go:141] libmachine: (test-preload-159708) Calling .DriverName
	I0122 20:51:46.903328  188344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 20:51:46.903402  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.903450  188344 ssh_runner.go:195] Run: cat /version.json
	I0122 20:51:46.903476  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHHostname
	I0122 20:51:46.905838  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.906206  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.906231  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.906251  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.906419  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:46.906597  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.906624  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:46.906648  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:46.906742  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:46.906845  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHPort
	I0122 20:51:46.906898  188344 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/id_rsa Username:docker}
	I0122 20:51:46.907015  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHKeyPath
	I0122 20:51:46.907163  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetSSHUsername
	I0122 20:51:46.907296  188344 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/test-preload-159708/id_rsa Username:docker}
	I0122 20:51:46.986869  188344 ssh_runner.go:195] Run: systemctl --version
	I0122 20:51:47.007176  188344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 20:51:47.012736  188344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 20:51:47.012819  188344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 20:51:47.029593  188344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 20:51:47.029619  188344 start.go:495] detecting cgroup driver to use...
	I0122 20:51:47.029679  188344 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 20:51:47.061432  188344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 20:51:47.074088  188344 docker.go:217] disabling cri-docker service (if available) ...
	I0122 20:51:47.074147  188344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 20:51:47.086978  188344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 20:51:47.099608  188344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 20:51:47.204929  188344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 20:51:47.344455  188344 docker.go:233] disabling docker service ...
	I0122 20:51:47.344553  188344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 20:51:47.357900  188344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 20:51:47.369942  188344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 20:51:47.509033  188344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 20:51:47.624393  188344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 20:51:47.638323  188344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 20:51:47.656408  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0122 20:51:47.666007  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 20:51:47.675611  188344 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 20:51:47.675681  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 20:51:47.685098  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 20:51:47.694693  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 20:51:47.704376  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 20:51:47.714046  188344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 20:51:47.723771  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 20:51:47.733657  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0122 20:51:47.743242  188344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
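The sed commands above rewrite /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, and the runc v2 runtime shim. A minimal in-process equivalent of two of those rewrites (minikube shells out to sed; this Go version is illustrative only):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Trimmed config.toml fragment, assumed for the example.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
runtime_type = "io.containerd.runtime.v1.linux"`

	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	conf = re.ReplaceAllString(conf, "${1}SystemdCgroup = false")
	// Equivalent of: sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g'
	conf = strings.ReplaceAll(conf, `"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`)
	fmt.Println(conf)
}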
	I0122 20:51:47.752803  188344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 20:51:47.761285  188344 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 20:51:47.761351  188344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 20:51:47.773936  188344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 20:51:47.782771  188344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 20:51:47.887860  188344 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 20:51:47.914750  188344 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0122 20:51:47.914835  188344 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 20:51:47.920088  188344 retry.go:31] will retry after 857.834211ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0122 20:51:48.778233  188344 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
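After restarting containerd, the log shows a 60s wait for /run/containerd/containerd.sock, with a retry when the first stat fails. A sketch of that poll loop, stating the path locally (minikube runs stat over SSH, and its retry interval differs from the fixed one assumed here):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // retry interval is illustrative
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}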
	I0122 20:51:48.783177  188344 start.go:563] Will wait 60s for crictl version
	I0122 20:51:48.783238  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:48.786633  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 20:51:48.818347  188344 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0122 20:51:48.818412  188344 ssh_runner.go:195] Run: containerd --version
	I0122 20:51:48.840492  188344 ssh_runner.go:195] Run: containerd --version
	I0122 20:51:48.864943  188344 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.7.23 ...
	I0122 20:51:48.866323  188344 main.go:141] libmachine: (test-preload-159708) Calling .GetIP
	I0122 20:51:48.869004  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:48.869377  188344 main.go:141] libmachine: (test-preload-159708) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:59:8e", ip: ""} in network mk-test-preload-159708: {Iface:virbr1 ExpiryTime:2025-01-22 21:51:37 +0000 UTC Type:0 Mac:52:54:00:8f:59:8e Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:test-preload-159708 Clientid:01:52:54:00:8f:59:8e}
	I0122 20:51:48.869396  188344 main.go:141] libmachine: (test-preload-159708) DBG | domain test-preload-159708 has defined IP address 192.168.39.64 and MAC address 52:54:00:8f:59:8e in network mk-test-preload-159708
	I0122 20:51:48.869617  188344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0122 20:51:48.873445  188344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 20:51:48.884906  188344 kubeadm.go:883] updating cluster {Name:test-preload-159708 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-159708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 20:51:48.885003  188344 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0122 20:51:48.885042  188344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 20:51:48.914363  188344 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0122 20:51:48.914391  188344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0122 20:51:48.914432  188344 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:48.914464  188344 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:48.914480  188344 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:48.914493  188344 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:48.914525  188344 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0122 20:51:48.914528  188344 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:48.914463  188344 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:48.914573  188344 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:48.915686  188344 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:48.915704  188344 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:48.915730  188344 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:48.915692  188344 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:48.915790  188344 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:48.915693  188344 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0122 20:51:48.915693  188344 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:48.915818  188344 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:49.045155  188344 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.8.6" and sha "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03"
	I0122 20:51:49.045215  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:49.057329  188344 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.24.4" and sha "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48"
	I0122 20:51:49.057387  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:49.058374  188344 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.24.4" and sha "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7"
	I0122 20:51:49.058419  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:49.065559  188344 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.7" and sha "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165"
	I0122 20:51:49.065597  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.7
	I0122 20:51:49.074936  188344 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0122 20:51:49.074970  188344 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:49.074995  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:49.080798  188344 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.3-0" and sha "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b"
	I0122 20:51:49.080849  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:49.097852  188344 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.24.4" and sha "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d"
	I0122 20:51:49.097905  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:49.098695  188344 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.24.4" and sha "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9"
	I0122 20:51:49.098737  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:49.109185  188344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0122 20:51:49.109232  188344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:49.109263  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:49.113423  188344 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0122 20:51:49.113453  188344 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0122 20:51:49.113480  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:49.113523  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:49.113689  188344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0122 20:51:49.113724  188344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:49.113760  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:49.146262  188344 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0122 20:51:49.146322  188344 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:49.146371  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:49.156482  188344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0122 20:51:49.156533  188344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:49.156567  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:49.156588  188344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0122 20:51:49.156611  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0122 20:51:49.156631  188344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:49.156669  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:49.156569  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:49.183101  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:49.183101  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:49.183155  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:49.236082  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:49.236193  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:49.236215  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:49.236311  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0122 20:51:49.307781  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0122 20:51:49.307838  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:49.309967  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:49.400761  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0122 20:51:49.400806  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:49.400833  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:49.400895  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0122 20:51:49.442955  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0122 20:51:49.443066  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0122 20:51:49.443080  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0122 20:51:49.443066  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0122 20:51:49.488497  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0122 20:51:49.517121  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0122 20:51:49.517224  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0122 20:51:49.520731  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0122 20:51:49.520730  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0122 20:51:49.520806  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0122 20:51:49.520828  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0122 20:51:49.520886  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0122 20:51:49.612480  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0122 20:51:49.612541  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0122 20:51:49.612595  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0122 20:51:49.612692  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0122 20:51:49.616654  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0122 20:51:49.616708  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0122 20:51:49.616722  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.24.4': No such file or directory
	I0122 20:51:49.616753  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 --> /var/lib/minikube/images/kube-controller-manager_v1.24.4 (31047168 bytes)
	I0122 20:51:49.616784  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0122 20:51:49.616756  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0122 20:51:49.616656  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0122 20:51:49.616825  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0122 20:51:49.653665  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0122 20:51:49.653725  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0122 20:51:49.653729  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.24.4': No such file or directory
	I0122 20:51:49.653766  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 --> /var/lib/minikube/images/kube-proxy_v1.24.4 (39519744 bytes)
	I0122 20:51:49.654281  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.24.4': No such file or directory
	I0122 20:51:49.654314  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 --> /var/lib/minikube/images/kube-apiserver_v1.24.4 (33814016 bytes)
	I0122 20:51:49.654323  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.24.4: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.24.4': No such file or directory
	I0122 20:51:49.654344  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 --> /var/lib/minikube/images/kube-scheduler_v1.24.4 (15491584 bytes)
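The pattern in the runs above is consistent: each cached image tarball is stat'ed on the guest, and only transferred when the stat fails. A sketch of that existence-check-then-transfer step using local files (minikube does the stat and copy over SSH/scp instead; paths here are illustrative):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp sequence in the log above.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	err := ensureFile("cache/images/pause_3.7", "/var/lib/minikube/images/pause_3.7")
	fmt.Println(err)
}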
	I0122 20:51:49.739652  188344 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.7
	I0122 20:51:49.739725  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0122 20:51:49.791643  188344 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0122 20:51:49.791716  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:50.347710  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0122 20:51:50.347758  188344 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0122 20:51:50.347776  188344 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0122 20:51:50.347814  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0122 20:51:50.347819  188344 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:50.347866  188344 ssh_runner.go:195] Run: which crictl
	I0122 20:51:51.468082  188344 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6: (1.120237838s)
	I0122 20:51:51.468119  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0122 20:51:51.468127  188344 ssh_runner.go:235] Completed: which crictl: (1.120237984s)
	I0122 20:51:51.468147  188344 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0122 20:51:51.468184  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0122 20:51:51.468184  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:52.654492  188344 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.4: (1.186285221s)
	I0122 20:51:52.654519  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0122 20:51:52.654519  188344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.186312522s)
	I0122 20:51:52.654546  188344 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0122 20:51:52.654597  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:52.654605  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0122 20:51:54.594006  188344 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.4: (1.93937381s)
	I0122 20:51:54.594037  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0122 20:51:54.594049  188344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.939429778s)
	I0122 20:51:54.594075  188344 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0122 20:51:54.594133  188344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 20:51:54.594141  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0122 20:51:56.967677  188344 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.373513798s)
	I0122 20:51:56.967707  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0122 20:51:56.967707  188344 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.37354597s)
	I0122 20:51:56.967740  188344 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0122 20:51:56.967773  188344 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0122 20:51:56.967792  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.4
	I0122 20:51:56.967868  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0122 20:51:56.972635  188344 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0122 20:51:56.972674  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0122 20:51:59.446665  188344 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.4: (2.478844928s)
	I0122 20:51:59.446693  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0122 20:51:59.446716  188344 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0122 20:51:59.446764  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0122 20:52:05.020346  188344 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (5.573549926s)
	I0122 20:52:05.020377  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0122 20:52:05.020407  188344 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0122 20:52:05.020456  188344 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0122 20:52:05.586012  188344 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0122 20:52:05.586061  188344 cache_images.go:123] Successfully loaded all cached images
	I0122 20:52:05.586069  188344 cache_images.go:92] duration metric: took 16.671666215s to LoadCachedImages
	I0122 20:52:05.586085  188344 kubeadm.go:934] updating node { 192.168.39.64 8443 v1.24.4 containerd true true} ...
	I0122 20:52:05.586212  188344 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-159708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-159708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 20:52:05.586278  188344 ssh_runner.go:195] Run: sudo crictl info
	I0122 20:52:05.619010  188344 cni.go:84] Creating CNI manager for ""
	I0122 20:52:05.619077  188344 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 20:52:05.619094  188344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 20:52:05.619120  188344 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-159708 NodeName:test-preload-159708 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 20:52:05.619251  188344 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "test-preload-159708"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
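	
	(For reference: the kubeadm config printed above is assembled from the kubeadm options struct logged earlier, plausibly via Go's text/template. A minimal sketch of rendering one trimmed fragment of it; the template and field names here are assumptions, not minikube's actual template.)
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Trimmed, illustrative fragment of a kubeadm ClusterConfiguration.
	const frag = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
	kubernetesVersion: {{.Version}}
	networking:
	  podSubnet: "{{.PodCIDR}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`
	
	func main() {
		t := template.Must(template.New("kubeadm").Parse(frag))
		// Values taken from the config printed above.
		t.Execute(os.Stdout, map[string]string{
			"Port":        "8443",
			"Version":     "v1.24.4",
			"PodCIDR":     "10.244.0.0/16",
			"ServiceCIDR": "10.96.0.0/12",
		})
	}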
	
	I0122 20:52:05.619336  188344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0122 20:52:05.628955  188344 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.24.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.24.4': No such file or directory
	
	Initiating transfer...
	I0122 20:52:05.629027  188344 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.24.4
	I0122 20:52:05.638273  188344 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/linux/amd64/v1.24.4/kubectl
	I0122 20:52:05.638305  188344 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/linux/amd64/v1.24.4/kubeadm
	I0122 20:52:05.638277  188344 download.go:108] Downloading: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/linux/amd64/v1.24.4/kubelet
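The "?checksum=file:<url>" suffix on these URLs is hashicorp/go-getter syntax: the library first fetches the referenced .sha256 file, then verifies the downloaded binary against that digest before moving it into place. A standalone fetch using the same mechanism (the destination path is illustrative):

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	src := "https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl.sha256"
	// GetFile downloads src to the given path and fails if the SHA-256
	// digest from the referenced .sha256 file does not match.
	if err := getter.GetFile("/tmp/kubectl", src); err != nil {
		log.Fatal(err)
	}
}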
	I0122 20:52:06.648278  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm
	I0122 20:52:06.655624  188344 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubeadm': No such file or directory
	I0122 20:52:06.655665  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/linux/amd64/v1.24.4/kubeadm --> /var/lib/minikube/binaries/v1.24.4/kubeadm (44384256 bytes)
	I0122 20:52:07.120746  188344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:52:07.134603  188344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubelet
	I0122 20:52:07.138805  188344 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.24.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.24.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.24.4/kubelet': No such file or directory
	I0122 20:52:07.138849  188344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/linux/amd64/v1.24.4/kubelet --> /var/lib/minikube/binaries/v1.24.4/kubelet (116062680 bytes)
	I0122 20:52:07.564481  188344 out.go:201] 
	W0122 20:52:07.565914  188344 out.go:270] X Exiting due to K8S_INSTALL_FAILED: Failed to update cluster: update primary control-plane node: downloading binaries: downloading kubectl: download failed: https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.24.4/bin/linux/amd64/kubectl.sha256 Dst:/home/jenkins/minikube-integration/20288-150966/.minikube/cache/linux/amd64/v1.24.4/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x59db560 0x59db560 0x59db560 0x59db560 0x59db560 0x59db560 0x59db560] Decompressors:map[bz2:0xc0007134d8 gz:0xc000713570 tar:0xc000713520 tar.bz2:0xc000713530 tar.gz:0xc000713540 tar.xz:0xc000713550 tar.zst:0xc000713560 tbz2:0xc000713530 tgz:0xc000713540 txz:0xc000713550 tzst:0xc000713560 xz:0xc000713578 zip:0xc000713580 zst:0xc000713590] Getters:map[file:0xc001c38310 http:0xc0008bb180 https:0xc0008bb400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: read tcp 10.138.0.48:57040->151.101.193.55:443: read: connection reset by peer
	W0122 20:52:07.565942  188344 out.go:270] * 
	W0122 20:52:07.566963  188344 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0122 20:52:07.568531  188344 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-linux-amd64 start -p test-preload-159708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4 failed: exit status 100
panic.go:629: *** TestPreload FAILED at 2025-01-22 20:52:07.597046796 +0000 UTC m=+3255.243842349
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-159708 -n test-preload-159708
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-159708 -n test-preload-159708: exit status 6 (237.198225ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0122 20:52:07.818803  188685 status.go:458] kubeconfig endpoint: get endpoint: "test-preload-159708" does not appear in /home/jenkins/minikube-integration/20288-150966/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-159708" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-159708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-159708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-159708: (1.23636153s)
--- FAIL: TestPreload (46.15s)
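The root cause above is a TCP connection reset from the release CDN (151.101.193.55 is in Fastly's 151.101.0.0/16 range) partway through the kubectl download, so no verified binary ever reaches the cache. A standalone way to check whether a cached binary matches its published digest, assuming the .sha256 contents were fetched separately (paths and digest are placeholders, not minikube code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/tmp/kubectl") // cached binary path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Hash the whole file and compare with the published digest.
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))

	// Paste the contents of the matching kubectl.sha256 file here.
	want := strings.TrimSpace("<digest from kubectl.sha256>")
	fmt.Println("digest matches:", got == want)
}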

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (1801.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-061998 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-061998 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (30m0.006174374s)

-- stdout --
	* [default-k8s-diff-port-061998] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "default-k8s-diff-port-061998" primary control-plane node in "default-k8s-diff-port-061998" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

-- /stdout --
** stderr ** 
	I0122 21:02:46.665778  199316 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:02:46.666191  199316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:02:46.666224  199316 out.go:358] Setting ErrFile to fd 2...
	I0122 21:02:46.666247  199316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:02:46.666822  199316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 21:02:46.667477  199316 out.go:352] Setting JSON to false
	I0122 21:02:46.668484  199316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9902,"bootTime":1737569865,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:02:46.668611  199316 start.go:139] virtualization: kvm guest
	I0122 21:02:46.670608  199316 out.go:177] * [default-k8s-diff-port-061998] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:02:46.672019  199316 notify.go:220] Checking for updates...
	I0122 21:02:46.672038  199316 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:02:46.673241  199316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:02:46.674616  199316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:02:46.675897  199316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:02:46.676995  199316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:02:46.678129  199316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:02:46.679901  199316 config.go:182] Loaded profile config "embed-certs-000171": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:02:46.680052  199316 config.go:182] Loaded profile config "no-preload-086882": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:02:46.680170  199316 config.go:182] Loaded profile config "old-k8s-version-989561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0122 21:02:46.680286  199316 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:02:46.719282  199316 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 21:02:46.720510  199316 start.go:297] selected driver: kvm2
	I0122 21:02:46.720544  199316 start.go:901] validating driver "kvm2" against <nil>
	I0122 21:02:46.720564  199316 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:02:46.721412  199316 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:02:46.721539  199316 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-150966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:02:46.737330  199316 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:02:46.737378  199316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 21:02:46.737615  199316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:02:46.737644  199316 cni.go:84] Creating CNI manager for ""
	I0122 21:02:46.737687  199316 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 21:02:46.737699  199316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 21:02:46.737743  199316 start.go:340] cluster config:
	{Name:default-k8s-diff-port-061998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-061998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:02:46.737868  199316 iso.go:125] acquiring lock: {Name:mkc3bf0604e328871936621dd0e0cda10261a449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:02:46.740298  199316 out.go:177] * Starting "default-k8s-diff-port-061998" primary control-plane node in "default-k8s-diff-port-061998" cluster
	I0122 21:02:46.741484  199316 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:02:46.741524  199316 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0122 21:02:46.741537  199316 cache.go:56] Caching tarball of preloaded images
	I0122 21:02:46.741628  199316 preload.go:172] Found /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 21:02:46.741644  199316 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0122 21:02:46.741754  199316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/config.json ...
	I0122 21:02:46.741785  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/config.json: {Name:mk9c9bb33dda12e88fe06c1ea232549124848f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:02:46.741939  199316 start.go:360] acquireMachinesLock for default-k8s-diff-port-061998: {Name:mkde076c0ff5ffaed1ac7d9ac4f697ecfb6e2cf2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:02:46.741997  199316 start.go:364] duration metric: took 22.461µs to acquireMachinesLock for "default-k8s-diff-port-061998"
	I0122 21:02:46.742021  199316 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-061998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-061998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 21:02:46.742114  199316 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 21:02:46.743563  199316 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0122 21:02:46.743698  199316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:02:46.743727  199316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:02:46.758091  199316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0122 21:02:46.758552  199316 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:02:46.759134  199316 main.go:141] libmachine: Using API Version  1
	I0122 21:02:46.759154  199316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:02:46.759494  199316 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:02:46.759689  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetMachineName
	I0122 21:02:46.759843  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:02:46.759978  199316 start.go:159] libmachine.API.Create for "default-k8s-diff-port-061998" (driver="kvm2")
	I0122 21:02:46.759999  199316 client.go:168] LocalClient.Create starting
	I0122 21:02:46.760035  199316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem
	I0122 21:02:46.760074  199316 main.go:141] libmachine: Decoding PEM data...
	I0122 21:02:46.760093  199316 main.go:141] libmachine: Parsing certificate...
	I0122 21:02:46.760163  199316 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem
	I0122 21:02:46.760192  199316 main.go:141] libmachine: Decoding PEM data...
	I0122 21:02:46.760211  199316 main.go:141] libmachine: Parsing certificate...
	I0122 21:02:46.760236  199316 main.go:141] libmachine: Running pre-create checks...
	I0122 21:02:46.760250  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .PreCreateCheck
	I0122 21:02:46.760554  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetConfigRaw
	I0122 21:02:46.760917  199316 main.go:141] libmachine: Creating machine...
	I0122 21:02:46.760932  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .Create
	I0122 21:02:46.761057  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) creating KVM machine...
	I0122 21:02:46.761078  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) creating network...
	I0122 21:02:46.762461  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found existing default KVM network
	I0122 21:02:46.763642  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:46.763513  199340 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:53:b5:67} reservation:<nil>}
	I0122 21:02:46.764655  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:46.764605  199340 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000282a40}
	I0122 21:02:46.764704  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | created network xml: 
	I0122 21:02:46.764726  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | <network>
	I0122 21:02:46.764738  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |   <name>mk-default-k8s-diff-port-061998</name>
	I0122 21:02:46.764747  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |   <dns enable='no'/>
	I0122 21:02:46.764753  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |   
	I0122 21:02:46.764763  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0122 21:02:46.764776  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |     <dhcp>
	I0122 21:02:46.764784  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0122 21:02:46.764808  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |     </dhcp>
	I0122 21:02:46.764825  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |   </ip>
	I0122 21:02:46.764831  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG |   
	I0122 21:02:46.764840  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | </network>
	I0122 21:02:46.764926  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | 
	I0122 21:02:46.769636  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | trying to create private KVM network mk-default-k8s-diff-port-061998 192.168.50.0/24...
	I0122 21:02:46.845528  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | private KVM network mk-default-k8s-diff-port-061998 192.168.50.0/24 created
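The kvm2 driver performs these steps through the libvirt API, so the network XML printed above boils down to a define call followed by a create call. A minimal sketch with the libvirt-go bindings, assuming qemu:///system is reachable (not the driver's actual code):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const netXML = `<network>
  <name>mk-default-k8s-diff-port-061998</name>
  <dns enable='no'/>
  <ip address='192.168.50.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.50.2' end='192.168.50.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network from the XML, then start it.
	net, err := conn.NetworkDefineXML(netXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
}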
	I0122 21:02:46.845586  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) setting up store path in /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998 ...
	I0122 21:02:46.845601  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:46.845487  199340 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:02:46.845655  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) building disk image from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 21:02:46.845702  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Downloading /home/jenkins/minikube-integration/20288-150966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 21:02:47.105027  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:47.104915  199340 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa...
	I0122 21:02:47.524601  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:47.524477  199340 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/default-k8s-diff-port-061998.rawdisk...
	I0122 21:02:47.524633  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Writing magic tar header
	I0122 21:02:47.524644  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Writing SSH key tar header
	I0122 21:02:47.524656  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:47.524592  199340 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998 ...
	I0122 21:02:47.524676  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998
	I0122 21:02:47.524782  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998 (perms=drwx------)
	I0122 21:02:47.524810  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines (perms=drwxr-xr-x)
	I0122 21:02:47.524825  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines
	I0122 21:02:47.524841  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:02:47.524853  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966
	I0122 21:02:47.524873  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 21:02:47.524884  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | checking permissions on dir: /home/jenkins
	I0122 21:02:47.524923  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube (perms=drwxr-xr-x)
	I0122 21:02:47.524937  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) setting executable bit set on /home/jenkins/minikube-integration/20288-150966 (perms=drwxrwxr-x)
	I0122 21:02:47.524946  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | checking permissions on dir: /home
	I0122 21:02:47.524962  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | skipping /home - not owner
	I0122 21:02:47.524978  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 21:02:47.524995  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0122 21:02:47.525008  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) creating domain...
	I0122 21:02:47.525997  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) define libvirt domain using xml: 
	I0122 21:02:47.526021  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) <domain type='kvm'>
	I0122 21:02:47.526032  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   <name>default-k8s-diff-port-061998</name>
	I0122 21:02:47.526044  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   <memory unit='MiB'>2200</memory>
	I0122 21:02:47.526057  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   <vcpu>2</vcpu>
	I0122 21:02:47.526064  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   <features>
	I0122 21:02:47.526073  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <acpi/>
	I0122 21:02:47.526097  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <apic/>
	I0122 21:02:47.526109  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <pae/>
	I0122 21:02:47.526120  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     
	I0122 21:02:47.526138  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   </features>
	I0122 21:02:47.526149  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   <cpu mode='host-passthrough'>
	I0122 21:02:47.526158  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   
	I0122 21:02:47.526168  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   </cpu>
	I0122 21:02:47.526176  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   <os>
	I0122 21:02:47.526186  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <type>hvm</type>
	I0122 21:02:47.526199  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <boot dev='cdrom'/>
	I0122 21:02:47.526213  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <boot dev='hd'/>
	I0122 21:02:47.526224  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <bootmenu enable='no'/>
	I0122 21:02:47.526232  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   </os>
	I0122 21:02:47.526247  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   <devices>
	I0122 21:02:47.526281  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <disk type='file' device='cdrom'>
	I0122 21:02:47.526314  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/boot2docker.iso'/>
	I0122 21:02:47.526329  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <target dev='hdc' bus='scsi'/>
	I0122 21:02:47.526341  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <readonly/>
	I0122 21:02:47.526353  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     </disk>
	I0122 21:02:47.526364  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <disk type='file' device='disk'>
	I0122 21:02:47.526389  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 21:02:47.526414  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/default-k8s-diff-port-061998.rawdisk'/>
	I0122 21:02:47.526431  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <target dev='hda' bus='virtio'/>
	I0122 21:02:47.526442  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     </disk>
	I0122 21:02:47.526452  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <interface type='network'>
	I0122 21:02:47.526458  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <source network='mk-default-k8s-diff-port-061998'/>
	I0122 21:02:47.526464  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <model type='virtio'/>
	I0122 21:02:47.526471  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     </interface>
	I0122 21:02:47.526484  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <interface type='network'>
	I0122 21:02:47.526500  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <source network='default'/>
	I0122 21:02:47.526510  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <model type='virtio'/>
	I0122 21:02:47.526517  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     </interface>
	I0122 21:02:47.526526  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <serial type='pty'>
	I0122 21:02:47.526537  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <target port='0'/>
	I0122 21:02:47.526546  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     </serial>
	I0122 21:02:47.526556  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <console type='pty'>
	I0122 21:02:47.526570  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <target type='serial' port='0'/>
	I0122 21:02:47.526586  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     </console>
	I0122 21:02:47.526599  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     <rng model='virtio'>
	I0122 21:02:47.526611  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)       <backend model='random'>/dev/random</backend>
	I0122 21:02:47.526622  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     </rng>
	I0122 21:02:47.526629  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     
	I0122 21:02:47.526640  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)     
	I0122 21:02:47.526649  199316 main.go:141] libmachine: (default-k8s-diff-port-061998)   </devices>
	I0122 21:02:47.526658  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) </domain>
	I0122 21:02:47.526672  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) 
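Defining and starting the guest from the domain XML above follows the same two-call pattern. Continuing the previous sketch, with conn being the same *libvirt.Connect and domXML holding the <domain> document from this log:

// defineAndStart mirrors the "define libvirt domain using xml" and
// "creating domain..." steps logged above.
func defineAndStart(conn *libvirt.Connect, domXML string) error {
	dom, err := conn.DomainDefineXML(domXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create() // starts the defined domain
}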
	I0122 21:02:47.530523  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:0e:d0:aa in network default
	I0122 21:02:47.531072  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) starting domain...
	I0122 21:02:47.531093  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:47.531111  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) ensuring networks are active...
	I0122 21:02:47.531790  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Ensuring network default is active
	I0122 21:02:47.532107  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Ensuring network mk-default-k8s-diff-port-061998 is active
	I0122 21:02:47.532676  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) getting domain XML...
	I0122 21:02:47.533370  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) creating domain...
	I0122 21:02:48.754490  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) waiting for IP...
	I0122 21:02:48.755341  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:48.755853  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:48.755892  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:48.755841  199340 retry.go:31] will retry after 261.664784ms: waiting for domain to come up
	I0122 21:02:49.019312  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:49.019914  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:49.019938  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:49.019892  199340 retry.go:31] will retry after 310.074615ms: waiting for domain to come up
	I0122 21:02:49.331129  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:49.331728  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:49.331761  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:49.331675  199340 retry.go:31] will retry after 441.629863ms: waiting for domain to come up
	I0122 21:02:49.775395  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:49.775878  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:49.775902  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:49.775850  199340 retry.go:31] will retry after 408.34433ms: waiting for domain to come up
	I0122 21:02:50.185378  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:50.186086  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:50.186123  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:50.186055  199340 retry.go:31] will retry after 519.283717ms: waiting for domain to come up
	I0122 21:02:50.706886  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:50.707403  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:50.707446  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:50.707371  199340 retry.go:31] will retry after 926.749195ms: waiting for domain to come up
	I0122 21:02:51.636013  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:51.636428  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:51.636451  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:51.636391  199340 retry.go:31] will retry after 846.856062ms: waiting for domain to come up
	I0122 21:02:52.484588  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:52.485082  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:52.485103  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:52.485055  199340 retry.go:31] will retry after 1.485537177s: waiting for domain to come up
	I0122 21:02:53.972780  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:53.973259  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:53.973290  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:53.973228  199340 retry.go:31] will retry after 1.398887285s: waiting for domain to come up
	I0122 21:02:55.373718  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:55.374207  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:55.374240  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:55.374150  199340 retry.go:31] will retry after 2.312376713s: waiting for domain to come up
	I0122 21:02:57.688240  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:57.688811  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:57.688845  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:57.688787  199340 retry.go:31] will retry after 2.07039916s: waiting for domain to come up
	I0122 21:02:59.762026  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:02:59.762548  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:02:59.762573  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:02:59.762529  199340 retry.go:31] will retry after 3.531622052s: waiting for domain to come up
	I0122 21:03:03.295604  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:03.296147  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find current IP address of domain default-k8s-diff-port-061998 in network mk-default-k8s-diff-port-061998
	I0122 21:03:03.296171  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | I0122 21:03:03.296112  199340 retry.go:31] will retry after 4.276964041s: waiting for domain to come up
	I0122 21:03:07.574626  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.575177  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) found domain IP: 192.168.50.147
	I0122 21:03:07.575230  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has current primary IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
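The retry.go lines above poll for a DHCP lease with growing, jittered delays (from roughly 260ms up to about 4.3s) until the guest reports an address. The same backoff shape in a self-contained sketch, where lookupIP stands in for the lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying libvirt's DHCP leases for the domain.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Jittered, roughly doubling backoff, like the intervals logged above.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	fmt.Println("timed out waiting for an IP")
}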
	I0122 21:03:07.575248  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) reserving static IP address...
	I0122 21:03:07.575560  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | unable to find host DHCP lease matching {name: "default-k8s-diff-port-061998", mac: "52:54:00:a1:a5:8f", ip: "192.168.50.147"} in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.651080  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Getting to WaitForSSH function...
	I0122 21:03:07.651124  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) reserved static IP address 192.168.50.147 for domain default-k8s-diff-port-061998
	I0122 21:03:07.651140  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) waiting for SSH...
	I0122 21:03:07.653617  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.654079  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:07.654110  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.654625  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Using SSH client type: external
	I0122 21:03:07.654644  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa (-rw-------)
	I0122 21:03:07.654667  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:03:07.654681  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | About to run SSH command:
	I0122 21:03:07.654691  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | exit 0
	I0122 21:03:07.778006  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | SSH cmd err, output: <nil>: 
	I0122 21:03:07.778307  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) KVM machine creation complete
	I0122 21:03:07.778713  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetConfigRaw
	I0122 21:03:07.779270  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:07.779474  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:07.779639  199316 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 21:03:07.779654  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetState
	I0122 21:03:07.781096  199316 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 21:03:07.781124  199316 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 21:03:07.781130  199316 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 21:03:07.781136  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:07.783984  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.784320  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:07.784362  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.784475  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:07.784646  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:07.784805  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:07.784941  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:07.785099  199316 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:07.785292  199316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0122 21:03:07.785305  199316 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 21:03:07.885223  199316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:03:07.885246  199316 main.go:141] libmachine: Detecting the provisioner...
	I0122 21:03:07.885255  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:07.888138  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.888470  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:07.888495  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.888709  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:07.888903  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:07.889069  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:07.889185  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:07.889343  199316 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:07.889523  199316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0122 21:03:07.889533  199316 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 21:03:07.990565  199316 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
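Provisioner detection is nothing more than running cat /etc/os-release over the freshly available SSH connection and matching the result against known distributions (Buildroot here). A minimal sketch with golang.org/x/crypto/ssh; the address and user mirror the log, while the key path is illustrative:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/tmp/id_rsa") // machine key path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.50.147:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.Output("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "ID=") {
			fmt.Println("detected provisioner id:", strings.TrimPrefix(line, "ID="))
		}
	}
}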
	I0122 21:03:07.990662  199316 main.go:141] libmachine: found compatible host: buildroot
	I0122 21:03:07.990676  199316 main.go:141] libmachine: Provisioning with buildroot...
	I0122 21:03:07.990691  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetMachineName
	I0122 21:03:07.990945  199316 buildroot.go:166] provisioning hostname "default-k8s-diff-port-061998"
	I0122 21:03:07.990971  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetMachineName
	I0122 21:03:07.991163  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:07.993931  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.994429  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:07.994462  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:07.994645  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:07.994853  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:07.995018  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:07.995159  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:07.995336  199316 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:07.995508  199316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0122 21:03:07.995520  199316 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-061998 && echo "default-k8s-diff-port-061998" | sudo tee /etc/hostname
	I0122 21:03:08.110491  199316 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-061998
	
	I0122 21:03:08.110523  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:08.113443  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.113818  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.113854  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.114165  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:08.114371  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.114531  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.114639  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:08.114789  199316 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:08.115026  199316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0122 21:03:08.115052  199316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-061998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-061998/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-061998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:03:08.222619  199316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:03:08.222652  199316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-150966/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-150966/.minikube}
	I0122 21:03:08.222672  199316 buildroot.go:174] setting up certificates
	I0122 21:03:08.222683  199316 provision.go:84] configureAuth start
	I0122 21:03:08.222695  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetMachineName
	I0122 21:03:08.222973  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetIP
	I0122 21:03:08.226013  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.226472  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.226498  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.226755  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:08.228991  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.229308  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.229339  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.229442  199316 provision.go:143] copyHostCerts
	I0122 21:03:08.229507  199316 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem, removing ...
	I0122 21:03:08.229530  199316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem
	I0122 21:03:08.229614  199316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem (1078 bytes)
	I0122 21:03:08.229743  199316 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem, removing ...
	I0122 21:03:08.229755  199316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem
	I0122 21:03:08.229799  199316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem (1123 bytes)
	I0122 21:03:08.229877  199316 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem, removing ...
	I0122 21:03:08.229888  199316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem
	I0122 21:03:08.229921  199316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem (1675 bytes)
	I0122 21:03:08.230015  199316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-061998 san=[127.0.0.1 192.168.50.147 default-k8s-diff-port-061998 localhost minikube]
	I0122 21:03:08.325033  199316 provision.go:177] copyRemoteCerts
	I0122 21:03:08.325099  199316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:03:08.325130  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:08.328215  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.328673  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.328704  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.328887  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:08.329120  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.329329  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:08.329519  199316 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa Username:docker}
	I0122 21:03:08.407877  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 21:03:08.430897  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0122 21:03:08.453151  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0122 21:03:08.475673  199316 provision.go:87] duration metric: took 252.977183ms to configureAuth
	I0122 21:03:08.475708  199316 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:03:08.475909  199316 config.go:182] Loaded profile config "default-k8s-diff-port-061998": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:03:08.475937  199316 main.go:141] libmachine: Checking connection to Docker...
	I0122 21:03:08.475955  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetURL
	I0122 21:03:08.477264  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | using libvirt version 6000000
	I0122 21:03:08.479607  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.479939  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.479981  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.480217  199316 main.go:141] libmachine: Docker is up and running!
	I0122 21:03:08.480256  199316 main.go:141] libmachine: Reticulating splines...
	I0122 21:03:08.480271  199316 client.go:171] duration metric: took 21.720262599s to LocalClient.Create
	I0122 21:03:08.480311  199316 start.go:167] duration metric: took 21.720332673s to libmachine.API.Create "default-k8s-diff-port-061998"
	I0122 21:03:08.480326  199316 start.go:293] postStartSetup for "default-k8s-diff-port-061998" (driver="kvm2")
	I0122 21:03:08.480344  199316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:03:08.480370  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:08.480634  199316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:03:08.480661  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:08.482763  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.483150  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.483180  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.483326  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:08.483520  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.483713  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:08.483852  199316 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa Username:docker}
	I0122 21:03:08.564108  199316 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:03:08.568011  199316 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:03:08.568035  199316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/addons for local assets ...
	I0122 21:03:08.568103  199316 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/files for local assets ...
	I0122 21:03:08.568234  199316 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem -> 1582712.pem in /etc/ssl/certs
	I0122 21:03:08.568342  199316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:03:08.577029  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:03:08.599484  199316 start.go:296] duration metric: took 119.139237ms for postStartSetup
	I0122 21:03:08.599532  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetConfigRaw
	I0122 21:03:08.600212  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetIP
	I0122 21:03:08.603103  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.603519  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.603550  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.603827  199316 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/config.json ...
	I0122 21:03:08.604053  199316 start.go:128] duration metric: took 21.861924596s to createHost
	I0122 21:03:08.604083  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:08.606229  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.606549  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.606573  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.606684  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:08.606868  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.607032  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.607211  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:08.607367  199316 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:08.607529  199316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.147 22 <nil> <nil>}
	I0122 21:03:08.607539  199316 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:03:08.706321  199316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737579788.679109004
	
	I0122 21:03:08.706346  199316 fix.go:216] guest clock: 1737579788.679109004
	I0122 21:03:08.706356  199316 fix.go:229] Guest: 2025-01-22 21:03:08.679109004 +0000 UTC Remote: 2025-01-22 21:03:08.604068549 +0000 UTC m=+21.977738838 (delta=75.040455ms)
	I0122 21:03:08.706397  199316 fix.go:200] guest clock delta is within tolerance: 75.040455ms
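
fix.go computes the guest/host clock delta from the `date +%s.%N` output above and resyncs only when it exceeds a tolerance. A sketch of that comparison using the two timestamps from this run; the 2-second threshold is an assumption, since the log only shows that 75ms passed the check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the log above: guest clock vs. host-side remote timestamp.
        guest := time.Unix(1737579788, 679109004)
        host := time.Date(2025, 1, 22, 21, 3, 8, 604068549, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 2 * time.Second // assumed threshold, not confirmed by the log
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
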
	I0122 21:03:08.706405  199316 start.go:83] releasing machines lock for "default-k8s-diff-port-061998", held for 21.964394874s
	I0122 21:03:08.706426  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:08.706734  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetIP
	I0122 21:03:08.709282  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.709650  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.709675  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.709809  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:08.710282  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:08.710458  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:08.710540  199316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:03:08.710594  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:08.710647  199316 ssh_runner.go:195] Run: cat /version.json
	I0122 21:03:08.710672  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:08.713151  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.713436  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.713466  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.713585  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.713603  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:08.713769  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.713933  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:08.713971  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:08.713979  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:08.714150  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:08.714148  199316 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa Username:docker}
	I0122 21:03:08.714294  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:08.714467  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:08.714635  199316 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa Username:docker}
	I0122 21:03:08.786606  199316 ssh_runner.go:195] Run: systemctl --version
	I0122 21:03:08.809836  199316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:03:08.815219  199316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:03:08.815306  199316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:03:08.830381  199316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:03:08.830409  199316 start.go:495] detecting cgroup driver to use...
	I0122 21:03:08.830464  199316 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 21:03:08.866449  199316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 21:03:08.880920  199316 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:03:08.880975  199316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:03:08.894025  199316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:03:08.906766  199316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:03:09.018166  199316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:03:09.155836  199316 docker.go:233] disabling docker service ...
	I0122 21:03:09.155931  199316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:03:09.169325  199316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:03:09.182147  199316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:03:09.316807  199316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:03:09.434311  199316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:03:09.447057  199316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:03:09.463844  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0122 21:03:09.473811  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 21:03:09.484463  199316 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 21:03:09.484522  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 21:03:09.494378  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:03:09.504138  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 21:03:09.514594  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:03:09.525892  199316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:03:09.537248  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 21:03:09.548754  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0122 21:03:09.558523  199316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
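
Each `sed -i -r` above rewrites /etc/containerd/config.toml in place on the guest. The SystemdCgroup edit, for example, is equivalent to this small regexp transform (a sketch, not minikube's actual code path):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`

        // Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
    }
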
	I0122 21:03:09.568630  199316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:03:09.577465  199316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:03:09.577511  199316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:03:09.588837  199316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
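
The status-255 sysctl above is the expected first-boot path: /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. The probe-then-fallback sequence, sketched:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Probe first; on a fresh VM /proc/sys/net/bridge/* does not exist yet.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // Expected failure: load the module that creates those sysctls.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                log.Fatal(err)
            }
        }
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            log.Fatal(err)
        }
    }
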
	I0122 21:03:09.597824  199316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:09.715710  199316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 21:03:09.744520  199316 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0122 21:03:09.744597  199316 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:03:09.748932  199316 retry.go:31] will retry after 1.023240186s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
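
retry.go re-runs the stat with a delay because `systemctl restart containerd` removes the socket before recreating it. A generic retry-until-deadline helper in the same spirit (a sketch; minikube's own retry package has more machinery):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // retryUntil re-invokes f with a growing delay until it succeeds
    // or the deadline passes, returning the last error on timeout.
    func retryUntil(timeout time.Duration, f func() error) error {
        deadline := time.Now().Add(timeout)
        delay := 500 * time.Millisecond
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            time.Sleep(delay)
            delay *= 2
        }
    }

    func main() {
        err := retryUntil(60*time.Second, func() error {
            _, err := os.Stat("/run/containerd/containerd.sock")
            return err
        })
        fmt.Println("socket ready:", err == nil)
    }
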
	I0122 21:03:10.773213  199316 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:03:10.778374  199316 start.go:563] Will wait 60s for crictl version
	I0122 21:03:10.778440  199316 ssh_runner.go:195] Run: which crictl
	I0122 21:03:10.782160  199316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:03:10.820891  199316 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0122 21:03:10.820971  199316 ssh_runner.go:195] Run: containerd --version
	I0122 21:03:10.849337  199316 ssh_runner.go:195] Run: containerd --version
	I0122 21:03:10.876867  199316 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0122 21:03:10.878017  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetIP
	I0122 21:03:10.880665  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:10.881089  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:10.881126  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:10.881316  199316 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0122 21:03:10.885191  199316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:03:10.897152  199316 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-061998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-061998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.147 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:03:10.897251  199316 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:03:10.897297  199316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:03:10.927006  199316 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:03:10.927067  199316 ssh_runner.go:195] Run: which lz4
	I0122 21:03:10.930988  199316 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:03:10.934956  199316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:03:10.934981  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398131433 bytes)
	I0122 21:03:12.135119  199316 containerd.go:563] duration metric: took 1.204157072s to copy over tarball
	I0122 21:03:12.135199  199316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:03:14.122405  199316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.987159305s)
	I0122 21:03:14.122439  199316 containerd.go:570] duration metric: took 1.987292872s to extract the tarball
	I0122 21:03:14.122446  199316 ssh_runner.go:146] rm: /preloaded.tar.lz4
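
This is the preload fast path: the ~398 MB tarball of container images is copied into the VM and unpacked into /var, so no images need to be pulled. The exact tar invocation from the log, wrapped in exec for a guest-side sketch:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same invocation as the log: keep security xattrs so file capabilities survive.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
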
	I0122 21:03:14.158901  199316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:14.267432  199316 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 21:03:14.295358  199316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:03:14.343189  199316 retry.go:31] will retry after 325.354113ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-22T21:03:14Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0122 21:03:14.668708  199316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:03:14.706977  199316 containerd.go:627] all images are preloaded for containerd runtime.
	I0122 21:03:14.707002  199316 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:03:14.707011  199316 kubeadm.go:934] updating node { 192.168.50.147 8444 v1.32.1 containerd true true} ...
	I0122 21:03:14.707117  199316 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-061998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-061998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:03:14.707178  199316 ssh_runner.go:195] Run: sudo crictl info
	I0122 21:03:14.743045  199316 cni.go:84] Creating CNI manager for ""
	I0122 21:03:14.743070  199316 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 21:03:14.743079  199316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:03:14.743102  199316 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.147 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-061998 NodeName:default-k8s-diff-port-061998 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:03:14.743209  199316 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.147
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-061998"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.147"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.147"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
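
The kubeadm.yaml above is rendered from the kubeadm options struct logged at kubeadm.go:189. A toy text/template rendering of just the InitConfiguration stanza shows the shape of that generation; the struct and field names here are illustrative, not minikube's actual template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // initCfg holds the handful of values interpolated below; hypothetical names.
    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("init").Parse(tmpl))
        // Values taken from this run's log.
        err := t.Execute(os.Stdout, initCfg{
            AdvertiseAddress: "192.168.50.147",
            BindPort:         8444,
            NodeName:         "default-k8s-diff-port-061998",
            CRISocket:        "unix:///run/containerd/containerd.sock",
        })
        if err != nil {
            log.Fatal(err)
        }
    }
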
	
	I0122 21:03:14.743274  199316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:03:14.753195  199316 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:03:14.753267  199316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:03:14.762469  199316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I0122 21:03:14.778224  199316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:03:14.793652  199316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2324 bytes)
	I0122 21:03:14.809778  199316 ssh_runner.go:195] Run: grep 192.168.50.147	control-plane.minikube.internal$ /etc/hosts
	I0122 21:03:14.813303  199316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:03:14.824820  199316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:14.932263  199316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:03:14.950074  199316 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998 for IP: 192.168.50.147
	I0122 21:03:14.950102  199316 certs.go:194] generating shared ca certs ...
	I0122 21:03:14.950119  199316 certs.go:226] acquiring lock for ca certs: {Name:mk53e9e3df6ffb3fa8285a86887df441ff5826d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:14.950264  199316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key
	I0122 21:03:14.950310  199316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key
	I0122 21:03:14.950320  199316 certs.go:256] generating profile certs ...
	I0122 21:03:14.950395  199316 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/client.key
	I0122 21:03:14.950407  199316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/client.crt with IP's: []
	I0122 21:03:15.103400  199316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/client.crt ...
	I0122 21:03:15.103432  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/client.crt: {Name:mkade76cb7ce1eb9244c882af61187405d114dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:15.103589  199316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/client.key ...
	I0122 21:03:15.103606  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/client.key: {Name:mk11a08ad45a3248e869441d02cd786808114b93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:15.103688  199316 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.key.f77c88fd
	I0122 21:03:15.103704  199316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.crt.f77c88fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.147]
	I0122 21:03:15.381302  199316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.crt.f77c88fd ...
	I0122 21:03:15.381335  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.crt.f77c88fd: {Name:mk1facb348e68483603f0c6b550c5df109ef596c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:15.381502  199316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.key.f77c88fd ...
	I0122 21:03:15.381517  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.key.f77c88fd: {Name:mk81e1b1aceb338c749684f868c5488a03c4d26e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:15.381588  199316 certs.go:381] copying /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.crt.f77c88fd -> /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.crt
	I0122 21:03:15.381654  199316 certs.go:385] copying /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.key.f77c88fd -> /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.key
	I0122 21:03:15.381706  199316 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.key
	I0122 21:03:15.381721  199316 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.crt with IP's: []
	I0122 21:03:15.507014  199316 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.crt ...
	I0122 21:03:15.507047  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.crt: {Name:mkb73700379bea8a0bb5b83f82276e7a4136a267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:15.507211  199316 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.key ...
	I0122 21:03:15.507223  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.key: {Name:mk47631043ef7542a695149629e2806beef76828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
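
crypto.go issues the apiserver certificate against the shared minikubeCA with the SAN list logged above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.147). A compact crypto/x509 sketch of signing a SAN-bearing server cert from a freshly generated CA; RSA-2048 and the one-year lifetime are assumptions, not values from this run:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        // Self-sign the CA, then use it to sign the server leaf below.
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.147"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
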
	I0122 21:03:15.507407  199316 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem (1338 bytes)
	W0122 21:03:15.507443  199316 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271_empty.pem, impossibly tiny 0 bytes
	I0122 21:03:15.507454  199316 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:03:15.507474  199316 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem (1078 bytes)
	I0122 21:03:15.507498  199316 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:03:15.507520  199316 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem (1675 bytes)
	I0122 21:03:15.507558  199316 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:03:15.508096  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:03:15.532917  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0122 21:03:15.555667  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:03:15.580776  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0122 21:03:15.603607  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0122 21:03:15.625834  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:03:15.647944  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:03:15.671026  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/default-k8s-diff-port-061998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:03:15.696327  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem --> /usr/share/ca-certificates/158271.pem (1338 bytes)
	I0122 21:03:15.730403  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /usr/share/ca-certificates/1582712.pem (1708 bytes)
	I0122 21:03:15.756453  199316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:03:15.784134  199316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:03:15.799896  199316 ssh_runner.go:195] Run: openssl version
	I0122 21:03:15.805501  199316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:03:15.816049  199316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:15.820185  199316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:15.820249  199316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:15.825621  199316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:03:15.836403  199316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/158271.pem && ln -fs /usr/share/ca-certificates/158271.pem /etc/ssl/certs/158271.pem"
	I0122 21:03:15.846708  199316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/158271.pem
	I0122 21:03:15.850897  199316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:06 /usr/share/ca-certificates/158271.pem
	I0122 21:03:15.850969  199316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/158271.pem
	I0122 21:03:15.857696  199316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/158271.pem /etc/ssl/certs/51391683.0"
	I0122 21:03:15.868161  199316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582712.pem && ln -fs /usr/share/ca-certificates/1582712.pem /etc/ssl/certs/1582712.pem"
	I0122 21:03:15.878815  199316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582712.pem
	I0122 21:03:15.882984  199316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:06 /usr/share/ca-certificates/1582712.pem
	I0122 21:03:15.883029  199316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582712.pem
	I0122 21:03:15.888310  199316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1582712.pem /etc/ssl/certs/3ec20f2e.0"
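
The `openssl x509 -hash` / `ln -fs` pairs above install each CA into OpenSSL's trust directory, where certificates are found by subject-hash filenames such as b5213941.0. The same dance in Go (a sketch that shells out to openssl rather than reimplementing the subject hash):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links pemPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name, mirroring: ln -fs <pem> /etc/ssl/certs/<hash>.0
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // -f semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
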
	I0122 21:03:15.898818  199316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:03:15.902635  199316 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0122 21:03:15.902692  199316 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-061998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-061998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.147 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:03:15.902786  199316 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0122 21:03:15.902826  199316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:03:15.937339  199316 cri.go:89] found id: ""
	I0122 21:03:15.937411  199316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:03:15.947629  199316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:03:15.957632  199316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:03:15.968345  199316 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:03:15.968364  199316 kubeadm.go:157] found existing configuration files:
	
	I0122 21:03:15.968407  199316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0122 21:03:15.978393  199316 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:03:15.978448  199316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:03:15.989229  199316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0122 21:03:15.997913  199316 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:03:15.997976  199316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:03:16.007383  199316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0122 21:03:16.017966  199316 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:03:16.018032  199316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:03:16.028977  199316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0122 21:03:16.039343  199316 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:03:16.039408  199316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
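	The four grep-and-remove rounds above are the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected apiserver endpoint (https://control-plane.minikube.internal:8444 here) and is otherwise deleted so kubeadm can regenerate it. Condensed into a shell sketch using the same paths and endpoint as the log:
	
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # keep the file only if it points at the expected endpoint
	        sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	            || sudo rm -f "/etc/kubernetes/$f"
	    done
	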
	I0122 21:03:16.048570  199316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:03:16.249910  199316 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:03:25.832091  199316 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 21:03:25.832165  199316 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:03:25.832284  199316 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:03:25.832408  199316 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:03:25.832556  199316 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 21:03:25.832663  199316 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:03:25.834433  199316 out.go:235]   - Generating certificates and keys ...
	I0122 21:03:25.834523  199316 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:03:25.834614  199316 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:03:25.834703  199316 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 21:03:25.834762  199316 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0122 21:03:25.834821  199316 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0122 21:03:25.834870  199316 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0122 21:03:25.834923  199316 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0122 21:03:25.835061  199316 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-061998 localhost] and IPs [192.168.50.147 127.0.0.1 ::1]
	I0122 21:03:25.835127  199316 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0122 21:03:25.835281  199316 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-061998 localhost] and IPs [192.168.50.147 127.0.0.1 ::1]
	I0122 21:03:25.835408  199316 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 21:03:25.835528  199316 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 21:03:25.835607  199316 kubeadm.go:310] [certs] Generating "sa" key and public key
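	All serving certs land under the certificateDir shown above (/var/lib/minikube/certs). Assuming the standard kubeadm layout below that directory, the etcd SANs reported a few lines up can be confirmed on the node with:
	
	    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/etcd/server.crt \
	        | grep -A1 'Subject Alternative Name'
	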
	I0122 21:03:25.835659  199316 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:03:25.835729  199316 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:03:25.835827  199316 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 21:03:25.835886  199316 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:03:25.835985  199316 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:03:25.836064  199316 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:03:25.836177  199316 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:03:25.836273  199316 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:03:25.837709  199316 out.go:235]   - Booting up control plane ...
	I0122 21:03:25.837806  199316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:03:25.837891  199316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:03:25.837950  199316 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:03:25.838102  199316 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:03:25.838212  199316 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:03:25.838287  199316 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:03:25.838459  199316 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 21:03:25.838564  199316 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 21:03:25.838615  199316 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.359511ms
	I0122 21:03:25.838678  199316 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 21:03:25.838734  199316 kubeadm.go:310] [api-check] The API server is healthy after 4.501550425s
	I0122 21:03:25.838847  199316 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 21:03:25.839019  199316 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 21:03:25.839099  199316 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 21:03:25.839303  199316 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-061998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 21:03:25.839390  199316 kubeadm.go:310] [bootstrap-token] Using token: 6bcwn5.2swivlm7psu8wmby
	I0122 21:03:25.840800  199316 out.go:235]   - Configuring RBAC rules ...
	I0122 21:03:25.840886  199316 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 21:03:25.840976  199316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 21:03:25.841161  199316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 21:03:25.841353  199316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 21:03:25.841449  199316 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 21:03:25.841516  199316 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 21:03:25.841650  199316 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 21:03:25.841694  199316 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0122 21:03:25.841741  199316 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0122 21:03:25.841750  199316 kubeadm.go:310] 
	I0122 21:03:25.841813  199316 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0122 21:03:25.841820  199316 kubeadm.go:310] 
	I0122 21:03:25.841918  199316 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0122 21:03:25.841927  199316 kubeadm.go:310] 
	I0122 21:03:25.841981  199316 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0122 21:03:25.842072  199316 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 21:03:25.842149  199316 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 21:03:25.842156  199316 kubeadm.go:310] 
	I0122 21:03:25.842236  199316 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0122 21:03:25.842242  199316 kubeadm.go:310] 
	I0122 21:03:25.842306  199316 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 21:03:25.842315  199316 kubeadm.go:310] 
	I0122 21:03:25.842386  199316 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0122 21:03:25.842489  199316 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 21:03:25.842585  199316 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 21:03:25.842595  199316 kubeadm.go:310] 
	I0122 21:03:25.842712  199316 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 21:03:25.842809  199316 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0122 21:03:25.842824  199316 kubeadm.go:310] 
	I0122 21:03:25.842957  199316 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6bcwn5.2swivlm7psu8wmby \
	I0122 21:03:25.843083  199316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a \
	I0122 21:03:25.843114  199316 kubeadm.go:310] 	--control-plane 
	I0122 21:03:25.843127  199316 kubeadm.go:310] 
	I0122 21:03:25.843256  199316 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0122 21:03:25.843266  199316 kubeadm.go:310] 
	I0122 21:03:25.843334  199316 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6bcwn5.2swivlm7psu8wmby \
	I0122 21:03:25.843470  199316 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a 
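	The --discovery-token-ca-cert-hash in both join commands is a SHA-256 over the cluster CA's public key, which joining nodes use to pin the CA. Per the standard kubeadm recipe it can be re-derived from the CA certificate (substituting this cluster's certificateDir for kubeadm's default /etc/kubernetes/pki):
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'
	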
	I0122 21:03:25.843483  199316 cni.go:84] Creating CNI manager for ""
	I0122 21:03:25.843489  199316 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 21:03:25.844835  199316 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:03:25.846120  199316 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:03:25.856251  199316 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
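	The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are the bridge CNI config itself. The log does not show the file's contents; a representative bridge conflist of the same shape (field values illustrative, not necessarily the exact file minikube writes) looks like:
	
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	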
	I0122 21:03:25.874771  199316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:03:25.874887  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:25.874897  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-061998 minikube.k8s.io/updated_at=2025_01_22T21_03_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4 minikube.k8s.io/name=default-k8s-diff-port-061998 minikube.k8s.io/primary=true
	I0122 21:03:25.887763  199316 ops.go:34] apiserver oom_adj: -16
	I0122 21:03:26.118454  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:26.618853  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:27.118761  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:27.618836  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:28.118701  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:28.619221  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:29.119227  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:29.619358  199316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:03:29.696627  199316 kubeadm.go:1113] duration metric: took 3.821802197s to wait for elevateKubeSystemPrivileges
	I0122 21:03:29.696666  199316 kubeadm.go:394] duration metric: took 13.79397809s to StartCluster
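	The repeated "get sa default" calls above are the elevateKubeSystemPrivileges step: bind cluster-admin to the kube-system default ServiceAccount (the clusterrolebinding command a few lines earlier), then poll every 500ms until the controller-manager has actually created the default ServiceAccount. Condensed, with the same binary and kubeconfig as the log:
	
	    k() { sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig "$@"; }
	    k create clusterrolebinding minikube-rbac \
	        --clusterrole=cluster-admin --serviceaccount=kube-system:default
	    # the default SA appears once kube-controller-manager is up; poll for it
	    until k get sa default >/dev/null 2>&1; do sleep 0.5; done
	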
	I0122 21:03:29.696689  199316 settings.go:142] acquiring lock: {Name:mkfbfc304d1e9b2b80529e33af6a426e89d118a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:29.696773  199316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:03:29.699187  199316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/kubeconfig: {Name:mk70478f45a79a3b41e7b46029f97939b1511ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:29.699455  199316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 21:03:29.699475  199316 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:03:29.699452  199316 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.147 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 21:03:29.699540  199316 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-061998"
	I0122 21:03:29.699554  199316 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-061998"
	I0122 21:03:29.699580  199316 host.go:66] Checking if "default-k8s-diff-port-061998" exists ...
	I0122 21:03:29.699589  199316 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-061998"
	I0122 21:03:29.699680  199316 config.go:182] Loaded profile config "default-k8s-diff-port-061998": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:03:29.699703  199316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-061998"
	I0122 21:03:29.700111  199316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:29.700132  199316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:29.700156  199316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:29.700166  199316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:29.701981  199316 out.go:177] * Verifying Kubernetes components...
	I0122 21:03:29.703123  199316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:29.715277  199316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0122 21:03:29.715736  199316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0122 21:03:29.715781  199316 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:29.716171  199316 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:29.716396  199316 main.go:141] libmachine: Using API Version  1
	I0122 21:03:29.716423  199316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:29.716666  199316 main.go:141] libmachine: Using API Version  1
	I0122 21:03:29.716692  199316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:29.716790  199316 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:29.716962  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetState
	I0122 21:03:29.717038  199316 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:29.717571  199316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:29.717626  199316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:29.720312  199316 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-061998"
	I0122 21:03:29.720354  199316 host.go:66] Checking if "default-k8s-diff-port-061998" exists ...
	I0122 21:03:29.720615  199316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:29.720678  199316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:29.733097  199316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0122 21:03:29.733547  199316 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:29.734091  199316 main.go:141] libmachine: Using API Version  1
	I0122 21:03:29.734122  199316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:29.734502  199316 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:29.734731  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetState
	I0122 21:03:29.735820  199316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33293
	I0122 21:03:29.736249  199316 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:29.736733  199316 main.go:141] libmachine: Using API Version  1
	I0122 21:03:29.736756  199316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:29.736777  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:29.737056  199316 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:29.737538  199316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:29.737575  199316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:29.738843  199316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:03:29.740108  199316 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:03:29.740129  199316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:03:29.740148  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:29.743500  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:29.743980  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:29.744011  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:29.744203  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:29.744389  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:29.744553  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:29.744704  199316 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa Username:docker}
	I0122 21:03:29.753739  199316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0122 21:03:29.754246  199316 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:29.754767  199316 main.go:141] libmachine: Using API Version  1
	I0122 21:03:29.754794  199316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:29.755116  199316 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:29.755303  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetState
	I0122 21:03:29.756960  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .DriverName
	I0122 21:03:29.757187  199316 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:03:29.757202  199316 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:03:29.757215  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHHostname
	I0122 21:03:29.760017  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:29.760412  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:a5:8f", ip: ""} in network mk-default-k8s-diff-port-061998: {Iface:virbr2 ExpiryTime:2025-01-22 22:03:01 +0000 UTC Type:0 Mac:52:54:00:a1:a5:8f Iaid: IPaddr:192.168.50.147 Prefix:24 Hostname:default-k8s-diff-port-061998 Clientid:01:52:54:00:a1:a5:8f}
	I0122 21:03:29.760446  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | domain default-k8s-diff-port-061998 has defined IP address 192.168.50.147 and MAC address 52:54:00:a1:a5:8f in network mk-default-k8s-diff-port-061998
	I0122 21:03:29.760574  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHPort
	I0122 21:03:29.760784  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHKeyPath
	I0122 21:03:29.760932  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .GetSSHUsername
	I0122 21:03:29.761092  199316 sshutil.go:53] new ssh client: &{IP:192.168.50.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/default-k8s-diff-port-061998/id_rsa Username:docker}
	I0122 21:03:29.888851  199316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
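	Unwrapped, the sed pipeline above fetches the coredns ConfigMap, inserts a hosts stanza ahead of the existing "forward . /etc/resolv.conf" directive (and a log directive ahead of errors), then replaces the ConfigMap, so the Corefile gains:
	
	    hosts {
	       192.168.50.1 host.minikube.internal
	       fallthrough
	    }
	
	This is what makes host.minikube.internal resolve to the host-side gateway from inside the cluster, as confirmed by the "host record injected" line further down.
	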
	I0122 21:03:29.899144  199316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:03:30.035966  199316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:03:30.115027  199316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:03:30.364741  199316 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0122 21:03:30.367243  199316 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-061998" to be "Ready" ...
	I0122 21:03:30.390318  199316 node_ready.go:49] node "default-k8s-diff-port-061998" has status "Ready":"True"
	I0122 21:03:30.390343  199316 node_ready.go:38] duration metric: took 23.069396ms for node "default-k8s-diff-port-061998" to be "Ready" ...
	I0122 21:03:30.390352  199316 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	W0122 21:03:30.399430  199316 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-061998" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0122 21:03:30.399452  199316 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
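	"the object has been modified; please apply your changes to the latest version" is the Kubernetes optimistic-concurrency conflict (HTTP 409): the coredns Deployment's resourceVersion changed between minikube's read and its write, presumably because the deployment controller was still reconciling the freshly created cluster. minikube classifies this as non-retryable and continues with two replicas; done manually, a short retry loop around a scale call (a hypothetical workaround, not minikube's code) sidesteps the race:
	
	    for i in 1 2 3; do
	        kubectl -n kube-system scale deployment coredns --replicas=1 && break
	        sleep 1
	    done
	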
	I0122 21:03:30.405454  199316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace to be "Ready" ...
	I0122 21:03:30.689502  199316 main.go:141] libmachine: Making call to close driver server
	I0122 21:03:30.689539  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .Close
	I0122 21:03:30.689583  199316 main.go:141] libmachine: Making call to close driver server
	I0122 21:03:30.689613  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .Close
	I0122 21:03:30.689878  199316 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:03:30.689897  199316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:03:30.689969  199316 main.go:141] libmachine: Making call to close driver server
	I0122 21:03:30.689991  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .Close
	I0122 21:03:30.689992  199316 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:03:30.690005  199316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:03:30.690013  199316 main.go:141] libmachine: Making call to close driver server
	I0122 21:03:30.690021  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .Close
	I0122 21:03:30.690232  199316 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:03:30.690253  199316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:03:30.690323  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Closing plugin on server side
	I0122 21:03:30.690442  199316 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:03:30.690456  199316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:03:30.690906  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Closing plugin on server side
	I0122 21:03:30.732411  199316 main.go:141] libmachine: Making call to close driver server
	I0122 21:03:30.732438  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) Calling .Close
	I0122 21:03:30.732783  199316 main.go:141] libmachine: (default-k8s-diff-port-061998) DBG | Closing plugin on server side
	I0122 21:03:30.732813  199316 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:03:30.732828  199316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:03:30.734432  199316 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0122 21:03:30.735538  199316 addons.go:514] duration metric: took 1.036064025s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0122 21:03:32.410429  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:34.411818  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:36.412531  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:38.913044  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:41.412230  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:43.911749  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:45.912578  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:48.411072  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:50.412150  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:52.911901  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:54.912030  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:56.913130  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:03:59.412464  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:01.912471  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:03.919381  199316 pod_ready.go:103] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:04.913285  199316 pod_ready.go:93] pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:04.913312  199316 pod_ready.go:82] duration metric: took 34.50783434s for pod "coredns-668d6bf9bc-7g77x" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.913326  199316 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-j6pzl" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.918711  199316 pod_ready.go:93] pod "coredns-668d6bf9bc-j6pzl" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:04.918738  199316 pod_ready.go:82] duration metric: took 5.404021ms for pod "coredns-668d6bf9bc-j6pzl" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.918750  199316 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.923999  199316 pod_ready.go:93] pod "etcd-default-k8s-diff-port-061998" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:04.924021  199316 pod_ready.go:82] duration metric: took 5.263016ms for pod "etcd-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.924031  199316 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.929690  199316 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-061998" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:04.929715  199316 pod_ready.go:82] duration metric: took 5.676995ms for pod "kube-apiserver-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.929729  199316 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.934193  199316 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-061998" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:04.934217  199316 pod_ready.go:82] duration metric: took 4.478622ms for pod "kube-controller-manager-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:04.934230  199316 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c68rw" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:05.310033  199316 pod_ready.go:93] pod "kube-proxy-c68rw" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:05.310061  199316 pod_ready.go:82] duration metric: took 375.822451ms for pod "kube-proxy-c68rw" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:05.310072  199316 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:05.709949  199316 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-061998" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:05.709998  199316 pod_ready.go:82] duration metric: took 399.916628ms for pod "kube-scheduler-default-k8s-diff-port-061998" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:05.710009  199316 pod_ready.go:39] duration metric: took 35.319646433s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
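	The ~35s of per-pod polling above is functionally a readiness gate over the listed labels; the equivalent check by hand would be a kubectl wait per selector, e.g.:
	
	    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m
	    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m
	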
	I0122 21:04:05.710028  199316 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:04:05.710086  199316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:04:05.727734  199316 api_server.go:72] duration metric: took 36.028168415s to wait for apiserver process to appear ...
	I0122 21:04:05.727767  199316 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:04:05.727794  199316 api_server.go:253] Checking apiserver healthz at https://192.168.50.147:8444/healthz ...
	I0122 21:04:05.736528  199316 api_server.go:279] https://192.168.50.147:8444/healthz returned 200:
	ok
	I0122 21:04:05.739573  199316 api_server.go:141] control plane version: v1.32.1
	I0122 21:04:05.739611  199316 api_server.go:131] duration metric: took 11.835739ms to wait for apiserver health ...
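	The healthz gate is a plain HTTPS GET; /healthz and /version are readable without credentials thanks to the default system:public-info-viewer binding, so the same probe works from the host (-k because the serving cert chains to the cluster-local minikubeCA):
	
	    curl -k https://192.168.50.147:8444/healthz    # prints: ok
	    curl -k https://192.168.50.147:8444/version    # JSON including "gitVersion": "v1.32.1"
	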
	I0122 21:04:05.739623  199316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:04:05.912891  199316 system_pods.go:59] 8 kube-system pods found
	I0122 21:04:05.912919  199316 system_pods.go:61] "coredns-668d6bf9bc-7g77x" [e2ba4dbd-2805-4c6f-847b-65fd77ed65bc] Running
	I0122 21:04:05.912925  199316 system_pods.go:61] "coredns-668d6bf9bc-j6pzl" [a14b2925-77d9-4528-94d8-be1c5e3a0874] Running
	I0122 21:04:05.912929  199316 system_pods.go:61] "etcd-default-k8s-diff-port-061998" [aef1f9a8-e37a-48f0-b9a6-6dd707026229] Running
	I0122 21:04:05.912933  199316 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-061998" [9dd61e3d-f0af-4f4e-b17d-60c2f2e4c361] Running
	I0122 21:04:05.912937  199316 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-061998" [95539222-4c17-4d6e-8b77-f252deee4e8b] Running
	I0122 21:04:05.912940  199316 system_pods.go:61] "kube-proxy-c68rw" [e51e2efb-1d9c-4101-b7de-9e35d56d97a2] Running
	I0122 21:04:05.912943  199316 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-061998" [06c04841-2d6e-4571-8e9e-a0426b224a25] Running
	I0122 21:04:05.912946  199316 system_pods.go:61] "storage-provisioner" [d3fa8ed4-5942-4f60-8d8f-7e91a791f3bf] Running
	I0122 21:04:05.912952  199316 system_pods.go:74] duration metric: took 173.323206ms to wait for pod list to return data ...
	I0122 21:04:05.912959  199316 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:04:06.109918  199316 default_sa.go:45] found service account: "default"
	I0122 21:04:06.109949  199316 default_sa.go:55] duration metric: took 196.982799ms for default service account to be created ...
	I0122 21:04:06.109979  199316 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 21:04:06.311512  199316 system_pods.go:87] 8 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-061998 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-061998 -n default-k8s-diff-port-061998
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-061998 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575 sudo cat                | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575 sudo cat                | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575 sudo cat                | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	| delete  | -p no-preload-086882                                 | no-preload-086882         | jenkins | v1.35.0 | 22 Jan 25 21:30 UTC | 22 Jan 25 21:30 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:13:52
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:13:52.462030  212748 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:13:52.462138  212748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:13:52.462146  212748 out.go:358] Setting ErrFile to fd 2...
	I0122 21:13:52.462149  212748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:13:52.462330  212748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 21:13:52.462930  212748 out.go:352] Setting JSON to false
	I0122 21:13:52.464076  212748 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10567,"bootTime":1737569865,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:13:52.464180  212748 start.go:139] virtualization: kvm guest
	I0122 21:13:52.466534  212748 out.go:177] * [enable-default-cni-988575] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:13:52.467937  212748 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:13:52.467980  212748 notify.go:220] Checking for updates...
	I0122 21:13:52.471304  212748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:13:52.472659  212748 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:13:52.474010  212748 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:13:52.475352  212748 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:13:52.476756  212748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:13:52.478672  212748 config.go:182] Loaded profile config "bridge-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:13:52.478793  212748 config.go:182] Loaded profile config "default-k8s-diff-port-061998": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:13:52.478905  212748 config.go:182] Loaded profile config "no-preload-086882": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:13:52.479019  212748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:13:52.519334  212748 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 21:13:52.520533  212748 start.go:297] selected driver: kvm2
	I0122 21:13:52.520547  212748 start.go:901] validating driver "kvm2" against <nil>
	I0122 21:13:52.520561  212748 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:13:52.521312  212748 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:13:52.521426  212748 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-150966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:13:52.538996  212748 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:13:52.539045  212748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0122 21:13:52.539248  212748 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0122 21:13:52.539288  212748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:13:52.539343  212748 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:13:52.539356  212748 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 21:13:52.539419  212748 start.go:340] cluster config:
	{Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:13:52.539516  212748 iso.go:125] acquiring lock: {Name:mkc3bf0604e328871936621dd0e0cda10261a449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:13:52.542335  212748 out.go:177] * Starting "enable-default-cni-988575" primary control-plane node in "enable-default-cni-988575" cluster
	I0122 21:13:52.543732  212748 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:13:52.543772  212748 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0122 21:13:52.543786  212748 cache.go:56] Caching tarball of preloaded images
	I0122 21:13:52.543865  212748 preload.go:172] Found /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 21:13:52.543879  212748 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
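
	The preload check above is a cache lookup: if the per-version tarball already exists under the cache directory, the download is skipped entirely. A rough sketch, with the path layout inferred from the log (the helper names are hypothetical):

	package sketch

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath mirrors the tarball naming visible in the log above.
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	// havePreload reports whether the cached tarball is already on disk.
	func havePreload(minikubeHome, k8sVersion, runtime string) bool {
		_, err := os.Stat(preloadPath(minikubeHome, k8sVersion, runtime))
		return err == nil
	}
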
	I0122 21:13:52.543999  212748 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/config.json ...
	I0122 21:13:52.544033  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/config.json: {Name:mk045d9ef235c448cc10a1d364b82bbe2bf70b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:13:52.544255  212748 start.go:360] acquireMachinesLock for enable-default-cni-988575: {Name:mkde076c0ff5ffaed1ac7d9ac4f697ecfb6e2cf2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:13:52.544319  212748 start.go:364] duration metric: took 37.18µs to acquireMachinesLock for "enable-default-cni-988575"
	I0122 21:13:52.544347  212748 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 21:13:52.544438  212748 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 21:13:52.546145  212748 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0122 21:13:52.546289  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:13:52.546324  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:13:52.561887  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0122 21:13:52.562347  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:13:52.562865  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:13:52.562894  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:13:52.563253  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:13:52.563458  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:13:52.563604  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
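
	The "Launching plugin server" / "Plugin server listening" lines reflect libmachine's driver-plugin pattern: the kvm2 driver runs as a separate process serving an RPC API on a loopback port, and the client dials it and invokes methods such as .GetVersion and .GetMachineName remotely. A stripped-down sketch of that pattern with Go's net/rpc (the type and method names here are illustrative, not libmachine's actual API):

	package main

	import (
		"log"
		"net"
		"net/rpc"
	)

	type Driver struct{}

	// GetVersion is a stand-in for a driver RPC method.
	func (d *Driver) GetVersion(_ string, reply *int) error {
		*reply = 1
		return nil
	}

	func main() {
		srv := rpc.NewServer()
		if err := srv.Register(&Driver{}); err != nil {
			log.Fatal(err)
		}
		l, err := net.Listen("tcp", "127.0.0.1:0") // kernel picks a free port
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("Plugin server listening at address %s", l.Addr())
		for {
			conn, err := l.Accept()
			if err != nil {
				log.Fatal(err)
			}
			go srv.ServeConn(conn)
		}
	}
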
	I0122 21:13:52.563809  212748 start.go:159] libmachine.API.Create for "enable-default-cni-988575" (driver="kvm2")
	I0122 21:13:52.563853  212748 client.go:168] LocalClient.Create starting
	I0122 21:13:52.563881  212748 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem
	I0122 21:13:52.563908  212748 main.go:141] libmachine: Decoding PEM data...
	I0122 21:13:52.563932  212748 main.go:141] libmachine: Parsing certificate...
	I0122 21:13:52.564004  212748 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem
	I0122 21:13:52.564043  212748 main.go:141] libmachine: Decoding PEM data...
	I0122 21:13:52.564056  212748 main.go:141] libmachine: Parsing certificate...
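
	The "Reading certificate data / Decoding PEM data / Parsing certificate" triple maps directly onto the standard library. A self-contained sketch:

	package sketch

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// parseCert reads a PEM file and returns the parsed certificate.
	func parseCert(path string) (*x509.Certificate, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return nil, err
		}
		block, _ := pem.Decode(data) // decode the first PEM block
		if block == nil || block.Type != "CERTIFICATE" {
			return nil, fmt.Errorf("%s: no CERTIFICATE PEM block found", path)
		}
		return x509.ParseCertificate(block.Bytes)
	}
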
	I0122 21:13:52.564077  212748 main.go:141] libmachine: Running pre-create checks...
	I0122 21:13:52.564089  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .PreCreateCheck
	I0122 21:13:52.564477  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetConfigRaw
	I0122 21:13:52.564860  212748 main.go:141] libmachine: Creating machine...
	I0122 21:13:52.564875  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Create
	I0122 21:13:52.565022  212748 main.go:141] libmachine: (enable-default-cni-988575) creating KVM machine...
	I0122 21:13:52.565041  212748 main.go:141] libmachine: (enable-default-cni-988575) creating network...
	I0122 21:13:52.566491  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found existing default KVM network
	I0122 21:13:52.567956  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.567804  212772 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:53:b5:67} reservation:<nil>}
	I0122 21:13:52.568844  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.568747  212772 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:eb:73} reservation:<nil>}
	I0122 21:13:52.569887  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.569789  212772 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:c9:b0:05} reservation:<nil>}
	I0122 21:13:52.570978  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.570894  212772 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003ea4f0}
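
	The subnet scan above walks candidate private /24s in order and settles on the first one not already claimed by a host interface (192.168.39/50/61 are held by other test profiles, so 192.168.72.0/24 wins). A simplified sketch of that selection, with the candidate list and the in-use test both reduced to essentials:

	package sketch

	import (
		"fmt"
		"net"
	)

	// inUse reports whether any local interface address falls inside cidr.
	func inUse(cidr string) bool {
		_, candidate, err := net.ParseCIDR(cidr)
		if err != nil {
			return true // treat unparsable candidates as unusable
		}
		addrs, _ := net.InterfaceAddrs()
		for _, a := range addrs {
			if ip, _, err := net.ParseCIDR(a.String()); err == nil && candidate.Contains(ip) {
				return true
			}
		}
		return false
	}

	// freeSubnet returns the first candidate subnet no interface occupies.
	func freeSubnet() (string, error) {
		for _, c := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"} {
			if !inUse(c) {
				return c, nil
			}
		}
		return "", fmt.Errorf("no free private subnet found")
	}
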
	I0122 21:13:52.571054  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | created network xml: 
	I0122 21:13:52.571080  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | <network>
	I0122 21:13:52.571091  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   <name>mk-enable-default-cni-988575</name>
	I0122 21:13:52.571104  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   <dns enable='no'/>
	I0122 21:13:52.571116  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   
	I0122 21:13:52.571129  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0122 21:13:52.571138  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |     <dhcp>
	I0122 21:13:52.571143  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0122 21:13:52.571149  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |     </dhcp>
	I0122 21:13:52.571159  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   </ip>
	I0122 21:13:52.571166  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   
	I0122 21:13:52.571173  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | </network>
	I0122 21:13:52.571181  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | 
	I0122 21:13:52.576943  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | trying to create private KVM network mk-enable-default-cni-988575 192.168.72.0/24...
	I0122 21:13:52.651263  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | private KVM network mk-enable-default-cni-988575 192.168.72.0/24 created
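
	Turning the generated XML into a live network is a define-then-start operation against libvirt. A hedged sketch using the github.com/libvirt/libvirt-go bindings (these calls exist in those bindings, but minikube's own wrapper code may differ):

	package sketch

	import (
		libvirt "github.com/libvirt/libvirt-go"
	)

	// createNetwork persists and starts a libvirt network from its XML definition.
	func createNetwork(networkXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		net, err := conn.NetworkDefineXML(networkXML) // persist the definition
		if err != nil {
			return err
		}
		defer net.Free()
		if err := net.Create(); err != nil { // bring the bridge up
			return err
		}
		return net.SetAutostart(true) // survive libvirtd restarts
	}
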
	I0122 21:13:52.651299  212748 main.go:141] libmachine: (enable-default-cni-988575) setting up store path in /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575 ...
	I0122 21:13:52.651317  212748 main.go:141] libmachine: (enable-default-cni-988575) building disk image from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 21:13:52.651377  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.651311  212772 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:13:52.651444  212748 main.go:141] libmachine: (enable-default-cni-988575) Downloading /home/jenkins/minikube-integration/20288-150966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 21:13:52.937242  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.937110  212772 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa...
	I0122 21:13:53.068321  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:53.068202  212772 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/enable-default-cni-988575.rawdisk...
	I0122 21:13:53.068350  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Writing magic tar header
	I0122 21:13:53.068364  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Writing SSH key tar header
	I0122 21:13:53.068373  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:53.068345  212772 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575 ...
	I0122 21:13:53.068500  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575
	I0122 21:13:53.068555  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines
	I0122 21:13:53.068584  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575 (perms=drwx------)
	I0122 21:13:53.068596  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:13:53.068614  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966
	I0122 21:13:53.068626  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 21:13:53.068640  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins
	I0122 21:13:53.068650  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home
	I0122 21:13:53.068660  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | skipping /home - not owner
	I0122 21:13:53.068674  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines (perms=drwxr-xr-x)
	I0122 21:13:53.068690  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube (perms=drwxr-xr-x)
	I0122 21:13:53.068701  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966 (perms=drwxrwxr-x)
	I0122 21:13:53.068714  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 21:13:53.068724  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
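
	The disk-image step above also mints the machine's SSH identity: the id_rsa created here is what every later WaitForSSH and provisioning call authenticates with, which is why the log later shows it with -rw------- permissions. A minimal sketch of that key generation (the path handling and key size are assumptions):

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"
	)

	// writeSSHKey generates an RSA keypair and writes the private key with
	// 0600 permissions, matching the mode seen in the log.
	func writeSSHKey(path string) error {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		pemBytes := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		return os.WriteFile(path, pemBytes, 0o600)
	}
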
	I0122 21:13:53.068734  212748 main.go:141] libmachine: (enable-default-cni-988575) creating domain...
	I0122 21:13:53.069937  212748 main.go:141] libmachine: (enable-default-cni-988575) define libvirt domain using xml: 
	I0122 21:13:53.069970  212748 main.go:141] libmachine: (enable-default-cni-988575) <domain type='kvm'>
	I0122 21:13:53.069981  212748 main.go:141] libmachine: (enable-default-cni-988575)   <name>enable-default-cni-988575</name>
	I0122 21:13:53.069995  212748 main.go:141] libmachine: (enable-default-cni-988575)   <memory unit='MiB'>3072</memory>
	I0122 21:13:53.070004  212748 main.go:141] libmachine: (enable-default-cni-988575)   <vcpu>2</vcpu>
	I0122 21:13:53.070017  212748 main.go:141] libmachine: (enable-default-cni-988575)   <features>
	I0122 21:13:53.070023  212748 main.go:141] libmachine: (enable-default-cni-988575)     <acpi/>
	I0122 21:13:53.070028  212748 main.go:141] libmachine: (enable-default-cni-988575)     <apic/>
	I0122 21:13:53.070036  212748 main.go:141] libmachine: (enable-default-cni-988575)     <pae/>
	I0122 21:13:53.070043  212748 main.go:141] libmachine: (enable-default-cni-988575)     
	I0122 21:13:53.070055  212748 main.go:141] libmachine: (enable-default-cni-988575)   </features>
	I0122 21:13:53.070063  212748 main.go:141] libmachine: (enable-default-cni-988575)   <cpu mode='host-passthrough'>
	I0122 21:13:53.070076  212748 main.go:141] libmachine: (enable-default-cni-988575)   
	I0122 21:13:53.070088  212748 main.go:141] libmachine: (enable-default-cni-988575)   </cpu>
	I0122 21:13:53.070097  212748 main.go:141] libmachine: (enable-default-cni-988575)   <os>
	I0122 21:13:53.070107  212748 main.go:141] libmachine: (enable-default-cni-988575)     <type>hvm</type>
	I0122 21:13:53.070115  212748 main.go:141] libmachine: (enable-default-cni-988575)     <boot dev='cdrom'/>
	I0122 21:13:53.070127  212748 main.go:141] libmachine: (enable-default-cni-988575)     <boot dev='hd'/>
	I0122 21:13:53.070136  212748 main.go:141] libmachine: (enable-default-cni-988575)     <bootmenu enable='no'/>
	I0122 21:13:53.070140  212748 main.go:141] libmachine: (enable-default-cni-988575)   </os>
	I0122 21:13:53.070145  212748 main.go:141] libmachine: (enable-default-cni-988575)   <devices>
	I0122 21:13:53.070149  212748 main.go:141] libmachine: (enable-default-cni-988575)     <disk type='file' device='cdrom'>
	I0122 21:13:53.070158  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/boot2docker.iso'/>
	I0122 21:13:53.070166  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target dev='hdc' bus='scsi'/>
	I0122 21:13:53.070171  212748 main.go:141] libmachine: (enable-default-cni-988575)       <readonly/>
	I0122 21:13:53.070176  212748 main.go:141] libmachine: (enable-default-cni-988575)     </disk>
	I0122 21:13:53.070181  212748 main.go:141] libmachine: (enable-default-cni-988575)     <disk type='file' device='disk'>
	I0122 21:13:53.070187  212748 main.go:141] libmachine: (enable-default-cni-988575)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 21:13:53.070195  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/enable-default-cni-988575.rawdisk'/>
	I0122 21:13:53.070199  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target dev='hda' bus='virtio'/>
	I0122 21:13:53.070204  212748 main.go:141] libmachine: (enable-default-cni-988575)     </disk>
	I0122 21:13:53.070208  212748 main.go:141] libmachine: (enable-default-cni-988575)     <interface type='network'>
	I0122 21:13:53.070214  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source network='mk-enable-default-cni-988575'/>
	I0122 21:13:53.070218  212748 main.go:141] libmachine: (enable-default-cni-988575)       <model type='virtio'/>
	I0122 21:13:53.070223  212748 main.go:141] libmachine: (enable-default-cni-988575)     </interface>
	I0122 21:13:53.070227  212748 main.go:141] libmachine: (enable-default-cni-988575)     <interface type='network'>
	I0122 21:13:53.070232  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source network='default'/>
	I0122 21:13:53.070236  212748 main.go:141] libmachine: (enable-default-cni-988575)       <model type='virtio'/>
	I0122 21:13:53.070241  212748 main.go:141] libmachine: (enable-default-cni-988575)     </interface>
	I0122 21:13:53.070249  212748 main.go:141] libmachine: (enable-default-cni-988575)     <serial type='pty'>
	I0122 21:13:53.070254  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target port='0'/>
	I0122 21:13:53.070263  212748 main.go:141] libmachine: (enable-default-cni-988575)     </serial>
	I0122 21:13:53.070268  212748 main.go:141] libmachine: (enable-default-cni-988575)     <console type='pty'>
	I0122 21:13:53.070277  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target type='serial' port='0'/>
	I0122 21:13:53.070317  212748 main.go:141] libmachine: (enable-default-cni-988575)     </console>
	I0122 21:13:53.070339  212748 main.go:141] libmachine: (enable-default-cni-988575)     <rng model='virtio'>
	I0122 21:13:53.070352  212748 main.go:141] libmachine: (enable-default-cni-988575)       <backend model='random'>/dev/random</backend>
	I0122 21:13:53.070359  212748 main.go:141] libmachine: (enable-default-cni-988575)     </rng>
	I0122 21:13:53.070367  212748 main.go:141] libmachine: (enable-default-cni-988575)     
	I0122 21:13:53.070373  212748 main.go:141] libmachine: (enable-default-cni-988575)     
	I0122 21:13:53.070382  212748 main.go:141] libmachine: (enable-default-cni-988575)   </devices>
	I0122 21:13:53.070388  212748 main.go:141] libmachine: (enable-default-cni-988575) </domain>
	I0122 21:13:53.070404  212748 main.go:141] libmachine: (enable-default-cni-988575) 
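
	As with the network, the domain XML above is defined and then started through libvirt. A companion sketch, again assuming the github.com/libvirt/libvirt-go bindings:

	package sketch

	import (
		libvirt "github.com/libvirt/libvirt-go"
	)

	// defineAndStart registers the domain definition and boots the VM.
	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create() // "starting domain..."
	}
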
	I0122 21:13:53.075192  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:a5:14:6c in network default
	I0122 21:13:53.075828  212748 main.go:141] libmachine: (enable-default-cni-988575) starting domain...
	I0122 21:13:53.075858  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:53.075867  212748 main.go:141] libmachine: (enable-default-cni-988575) ensuring networks are active...
	I0122 21:13:53.076644  212748 main.go:141] libmachine: (enable-default-cni-988575) Ensuring network default is active
	I0122 21:13:53.076984  212748 main.go:141] libmachine: (enable-default-cni-988575) Ensuring network mk-enable-default-cni-988575 is active
	I0122 21:13:53.077543  212748 main.go:141] libmachine: (enable-default-cni-988575) getting domain XML...
	I0122 21:13:53.078350  212748 main.go:141] libmachine: (enable-default-cni-988575) creating domain...
	I0122 21:13:54.434169  212748 main.go:141] libmachine: (enable-default-cni-988575) waiting for IP...
	I0122 21:13:54.435047  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:54.435503  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:54.435567  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:54.435496  212772 retry.go:31] will retry after 260.723128ms: waiting for domain to come up
	I0122 21:13:54.698112  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:54.698752  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:54.698808  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:54.698738  212772 retry.go:31] will retry after 344.421038ms: waiting for domain to come up
	I0122 21:13:55.045156  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:55.045738  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:55.045843  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:55.045724  212772 retry.go:31] will retry after 460.672457ms: waiting for domain to come up
	I0122 21:13:55.508426  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:55.509111  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:55.509142  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:55.509084  212772 retry.go:31] will retry after 539.824691ms: waiting for domain to come up
	I0122 21:13:56.050990  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:56.051505  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:56.051543  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:56.051454  212772 retry.go:31] will retry after 578.212643ms: waiting for domain to come up
	I0122 21:13:56.631107  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:56.631646  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:56.631720  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:56.631610  212772 retry.go:31] will retry after 658.680433ms: waiting for domain to come up
	I0122 21:13:57.291529  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:57.292055  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:57.292088  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:57.292032  212772 retry.go:31] will retry after 1.151478398s: waiting for domain to come up
	I0122 21:13:58.445714  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:58.446251  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:58.446292  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:58.446217  212772 retry.go:31] will retry after 904.224441ms: waiting for domain to come up
	I0122 21:13:59.352476  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:59.353064  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:59.353089  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:59.353039  212772 retry.go:31] will retry after 1.500303009s: waiting for domain to come up
	I0122 21:14:00.855018  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:00.855482  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:00.855509  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:00.855435  212772 retry.go:31] will retry after 1.760740196s: waiting for domain to come up
	I0122 21:14:02.617581  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:02.618106  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:02.618135  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:02.618070  212772 retry.go:31] will retry after 2.14599391s: waiting for domain to come up
	I0122 21:14:04.766356  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:04.766927  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:04.766953  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:04.766832  212772 retry.go:31] will retry after 3.47274679s: waiting for domain to come up
	I0122 21:14:08.241224  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:08.241679  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:08.241704  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:08.241643  212772 retry.go:31] will retry after 4.474921851s: waiting for domain to come up
	I0122 21:14:12.718227  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:12.718877  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:12.718908  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:12.718845  212772 retry.go:31] will retry after 5.670113196s: waiting for domain to come up
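
	The "waiting for IP" loop above retries with a jittered, growing delay (from roughly 260ms up through several seconds) until the DHCP lease appears. A sketch of that backoff shape; the lookupIP callback is hypothetical:

	package sketch

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with jittered, increasing sleeps until it
	// reports an address or the deadline passes.
	func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookupIP(); ok {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2 // grow roughly 1.5x per attempt
		}
		return "", fmt.Errorf("timed out waiting for domain IP")
	}
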
	I0122 21:14:18.390428  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.390974  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has current primary IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.391006  212748 main.go:141] libmachine: (enable-default-cni-988575) found domain IP: 192.168.72.236
	I0122 21:14:18.391015  212748 main.go:141] libmachine: (enable-default-cni-988575) reserving static IP address...
	I0122 21:14:18.391415  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-988575", mac: "52:54:00:2a:3f:25", ip: "192.168.72.236"} in network mk-enable-default-cni-988575
	I0122 21:14:18.465163  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Getting to WaitForSSH function...
	I0122 21:14:18.465201  212748 main.go:141] libmachine: (enable-default-cni-988575) reserved static IP address 192.168.72.236 for domain enable-default-cni-988575
	I0122 21:14:18.465215  212748 main.go:141] libmachine: (enable-default-cni-988575) waiting for SSH...
	I0122 21:14:18.468087  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.468463  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.468497  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.468668  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Using SSH client type: external
	I0122 21:14:18.468691  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa (-rw-------)
	I0122 21:14:18.468735  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:14:18.468754  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | About to run SSH command:
	I0122 21:14:18.468770  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | exit 0
	I0122 21:14:18.594036  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | SSH cmd err, output: <nil>: 
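
	The SSH wait above shells out to the system ssh client with host-key checking disabled and runs `exit 0`; a zero exit status means sshd inside the guest is up and accepting the machine key. Condensed to a sketch:

	package sketch

	import (
		"os/exec"
	)

	// sshReady probes the guest: true once `ssh ... exit 0` succeeds.
	func sshReady(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		return cmd.Run() == nil
	}
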
	I0122 21:14:18.594316  212748 main.go:141] libmachine: (enable-default-cni-988575) KVM machine creation complete
	I0122 21:14:18.594638  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetConfigRaw
	I0122 21:14:18.595194  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:18.595358  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:18.595517  212748 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 21:14:18.595534  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:18.597006  212748 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 21:14:18.597022  212748 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 21:14:18.597030  212748 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 21:14:18.597038  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.599567  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.599989  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.600019  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.600146  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.600366  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.600523  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.600649  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.600873  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.601079  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.601096  212748 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 21:14:18.709367  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:14:18.709394  212748 main.go:141] libmachine: Detecting the provisioner...
	I0122 21:14:18.709405  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.712583  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.712901  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.712932  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.713098  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.713315  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.713460  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.713577  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.713743  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.713891  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.713902  212748 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 21:14:18.822488  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0122 21:14:18.822567  212748 main.go:141] libmachine: found compatible host: buildroot
	I0122 21:14:18.822582  212748 main.go:141] libmachine: Provisioning with buildroot...
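
	Provisioner detection keys off the ID field of the /etc/os-release output shown above ("buildroot" here selects the buildroot provisioner). A small parsing sketch:

	package sketch

	import (
		"bufio"
		"strings"
	)

	// osReleaseID extracts the ID= value from `cat /etc/os-release` output.
	func osReleaseID(output string) string {
		sc := bufio.NewScanner(strings.NewReader(output))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return ""
	}
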
	I0122 21:14:18.822594  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:14:18.822851  212748 buildroot.go:166] provisioning hostname "enable-default-cni-988575"
	I0122 21:14:18.822885  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:14:18.823114  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.825940  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.826303  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.826335  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.826494  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.826678  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.826831  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.826996  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.827154  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.827343  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.827361  212748 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-988575 && echo "enable-default-cni-988575" | sudo tee /etc/hostname
	I0122 21:14:18.947616  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-988575
	
	I0122 21:14:18.947647  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.950553  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.950947  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.950972  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.951225  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.951446  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.951599  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.951750  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.951984  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.952170  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.952189  212748 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-988575' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-988575/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-988575' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:14:19.066558  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:14:19.066589  212748 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-150966/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-150966/.minikube}
	I0122 21:14:19.066631  212748 buildroot.go:174] setting up certificates
	I0122 21:14:19.066642  212748 provision.go:84] configureAuth start
	I0122 21:14:19.066655  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:14:19.066952  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:19.069744  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.070117  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.070149  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.070288  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.072309  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.072607  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.072637  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.072705  212748 provision.go:143] copyHostCerts
	I0122 21:14:19.072795  212748 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem, removing ...
	I0122 21:14:19.072807  212748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem
	I0122 21:14:19.072873  212748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem (1675 bytes)
	I0122 21:14:19.073012  212748 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem, removing ...
	I0122 21:14:19.073023  212748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem
	I0122 21:14:19.073050  212748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem (1078 bytes)
	I0122 21:14:19.073114  212748 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem, removing ...
	I0122 21:14:19.073121  212748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem
	I0122 21:14:19.073141  212748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem (1123 bytes)
	I0122 21:14:19.073199  212748 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-988575 san=[127.0.0.1 192.168.72.236 enable-default-cni-988575 localhost minikube]
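
	The server certificate generated here is signed by the local minikube CA and carries the SAN list from the log (loopback, the VM IP, the machine name, localhost, minikube). A hedged sketch of issuing such a certificate with the standard library; the 26280h lifetime matches the CertExpiration value in the cluster config above:

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server cert for the given SANs, signed by the CA.
	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip) // IP SANs
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s) // DNS SANs
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
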
	I0122 21:14:19.172137  212748 provision.go:177] copyRemoteCerts
	I0122 21:14:19.172198  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:14:19.172221  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.175114  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.175491  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.175526  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.175686  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.175857  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.175975  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.176090  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.261340  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:14:19.286924  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 21:14:19.311436  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0122 21:14:19.335640  212748 provision.go:87] duration metric: took 268.982512ms to configureAuth
	I0122 21:14:19.335668  212748 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:14:19.335819  212748 config.go:182] Loaded profile config "enable-default-cni-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:14:19.335842  212748 main.go:141] libmachine: Checking connection to Docker...
	I0122 21:14:19.335856  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetURL
	I0122 21:14:19.337207  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | using libvirt version 6000000
	I0122 21:14:19.339361  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.339676  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.339709  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.339861  212748 main.go:141] libmachine: Docker is up and running!
	I0122 21:14:19.339875  212748 main.go:141] libmachine: Reticulating splines...
	I0122 21:14:19.339882  212748 client.go:171] duration metric: took 26.776019518s to LocalClient.Create
	I0122 21:14:19.339905  212748 start.go:167] duration metric: took 26.77609661s to libmachine.API.Create "enable-default-cni-988575"
	I0122 21:14:19.339918  212748 start.go:293] postStartSetup for "enable-default-cni-988575" (driver="kvm2")
	I0122 21:14:19.339931  212748 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:14:19.339959  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.340221  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:14:19.340253  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.342393  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.342696  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.342729  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.342842  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.342988  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.343108  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.343250  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.427771  212748 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:14:19.431650  212748 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:14:19.431684  212748 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/addons for local assets ...
	I0122 21:14:19.431763  212748 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/files for local assets ...
	I0122 21:14:19.431855  212748 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem -> 1582712.pem in /etc/ssl/certs
	I0122 21:14:19.431961  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:14:19.442056  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:14:19.464446  212748 start.go:296] duration metric: took 124.512955ms for postStartSetup
	I0122 21:14:19.464511  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetConfigRaw
	I0122 21:14:19.465103  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:19.467761  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.468160  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.468192  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.468416  212748 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/config.json ...
	I0122 21:14:19.468600  212748 start.go:128] duration metric: took 26.924150387s to createHost
	I0122 21:14:19.468632  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.471643  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.472067  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.472100  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.472259  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.472452  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.472630  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.472773  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.472937  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:19.473132  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:19.473145  212748 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:14:19.586584  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580459.566029650
	
	I0122 21:14:19.586607  212748 fix.go:216] guest clock: 1737580459.566029650
	I0122 21:14:19.586614  212748 fix.go:229] Guest: 2025-01-22 21:14:19.56602965 +0000 UTC Remote: 2025-01-22 21:14:19.468618964 +0000 UTC m=+27.045457740 (delta=97.410686ms)
	I0122 21:14:19.586639  212748 fix.go:200] guest clock delta is within tolerance: 97.410686ms
	I0122 21:14:19.586646  212748 start.go:83] releasing machines lock for "enable-default-cni-988575", held for 27.04231258s
	I0122 21:14:19.586671  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.586929  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:19.589854  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.590297  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.590336  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.590469  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.591039  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.591232  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.591336  212748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:14:19.591397  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.591454  212748 ssh_runner.go:195] Run: cat /version.json
	I0122 21:14:19.591480  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.594144  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.594350  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.594515  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.594538  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.594669  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.594843  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.594856  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.594872  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.595048  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.595050  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.595281  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.595326  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.595477  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.595616  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.697764  212748 ssh_runner.go:195] Run: systemctl --version
	I0122 21:14:19.703608  212748 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:14:19.709903  212748 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:14:19.709995  212748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:14:19.725145  212748 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:14:19.725164  212748 start.go:495] detecting cgroup driver to use...
	I0122 21:14:19.725233  212748 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 21:14:19.754557  212748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 21:14:19.767298  212748 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:14:19.767357  212748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:14:19.781338  212748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:14:19.794364  212748 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:14:19.917036  212748 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:14:20.057993  212748 docker.go:233] disabling docker service ...
	I0122 21:14:20.058069  212748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:14:20.072068  212748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:14:20.084357  212748 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:14:20.232819  212748 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:14:20.364857  212748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:14:20.377774  212748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:14:20.395048  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0122 21:14:20.406101  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 21:14:20.417078  212748 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 21:14:20.417147  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 21:14:20.428174  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:14:20.438691  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 21:14:20.448932  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:14:20.459787  212748 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:14:20.470777  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 21:14:20.481308  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0122 21:14:20.491411  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0122 21:14:20.501617  212748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:14:20.512416  212748 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:14:20.512475  212748 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:14:20.526215  212748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0122 21:14:20.535803  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:20.658501  212748 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 21:14:20.686913  212748 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0122 21:14:20.687011  212748 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:14:20.694182  212748 retry.go:31] will retry after 1.006796171s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0122 21:14:21.701278  212748 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:14:21.707269  212748 start.go:563] Will wait 60s for crictl version
	I0122 21:14:21.707335  212748 ssh_runner.go:195] Run: which crictl
	I0122 21:14:21.711692  212748 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:14:21.749454  212748 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0122 21:14:21.749535  212748 ssh_runner.go:195] Run: containerd --version
	I0122 21:14:21.774308  212748 ssh_runner.go:195] Run: containerd --version
	I0122 21:14:21.801692  212748 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0122 21:14:21.803066  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:21.806023  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:21.806402  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:21.806434  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:21.806607  212748 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0122 21:14:21.810687  212748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:14:21.823144  212748 kubeadm.go:883] updating cluster {Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:14:21.823250  212748 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:14:21.823307  212748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:14:21.855145  212748 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:14:21.855208  212748 ssh_runner.go:195] Run: which lz4
	I0122 21:14:21.858888  212748 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:14:21.862698  212748 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:14:21.862733  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398131433 bytes)
	I0122 21:14:23.148193  212748 containerd.go:563] duration metric: took 1.289327237s to copy over tarball
	I0122 21:14:23.148289  212748 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:14:25.356962  212748 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.208632175s)
	I0122 21:14:25.357001  212748 containerd.go:570] duration metric: took 2.208769374s to extract the tarball
	I0122 21:14:25.357013  212748 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:14:25.397308  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:25.516558  212748 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 21:14:25.547883  212748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:14:25.583791  212748 retry.go:31] will retry after 264.622937ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-22T21:14:25Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0122 21:14:25.849327  212748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:14:25.886521  212748 containerd.go:627] all images are preloaded for containerd runtime.
	I0122 21:14:25.886549  212748 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:14:25.886564  212748 kubeadm.go:934] updating node { 192.168.72.236 8443 v1.32.1 containerd true true} ...
	I0122 21:14:25.886700  212748 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-988575 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0122 21:14:25.886770  212748 ssh_runner.go:195] Run: sudo crictl info
	I0122 21:14:25.919854  212748 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:14:25.919875  212748 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:14:25.919894  212748 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.236 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-988575 NodeName:enable-default-cni-988575 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:14:25.919989  212748 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-988575"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.236"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.236"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0122 21:14:25.920045  212748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:14:25.931000  212748 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:14:25.931066  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:14:25.940134  212748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0122 21:14:25.957006  212748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:14:25.972902  212748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2321 bytes)
	I0122 21:14:25.988975  212748 ssh_runner.go:195] Run: grep 192.168.72.236	control-plane.minikube.internal$ /etc/hosts
	I0122 21:14:25.992647  212748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:14:26.004697  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:26.119955  212748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:14:26.140771  212748 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575 for IP: 192.168.72.236
	I0122 21:14:26.140794  212748 certs.go:194] generating shared ca certs ...
	I0122 21:14:26.140809  212748 certs.go:226] acquiring lock for ca certs: {Name:mk53e9e3df6ffb3fa8285a86887df441ff5826d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.140965  212748 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key
	I0122 21:14:26.141008  212748 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key
	I0122 21:14:26.141021  212748 certs.go:256] generating profile certs ...
	I0122 21:14:26.141078  212748 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.key
	I0122 21:14:26.141091  212748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt with IP's: []
	I0122 21:14:26.208946  212748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt ...
	I0122 21:14:26.208977  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: {Name:mk9883dcae0c1cd3f2f0a907151ab66214df6bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.246185  212748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.key ...
	I0122 21:14:26.246234  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.key: {Name:mk33633cded10207e2390ad08a3dd8fc1c7b5df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.271797  212748 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f
	I0122 21:14:26.271867  212748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.236]
	I0122 21:14:26.558342  212748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f ...
	I0122 21:14:26.558372  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f: {Name:mk023b50773fed80cc80f0a8399195809b6f6481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.558539  212748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f ...
	I0122 21:14:26.558555  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f: {Name:mkbd6f96068489529590a700ebae5eb8ec4ea1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.558652  212748 certs.go:381] copying /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f -> /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt
	I0122 21:14:26.558744  212748 certs.go:385] copying /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f -> /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key
	I0122 21:14:26.558797  212748 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key
	I0122 21:14:26.558813  212748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt with IP's: []
	I0122 21:14:26.728616  212748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt ...
	I0122 21:14:26.728653  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt: {Name:mk60d2d3357b997bcee82a68de0c9bab86dcbb59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.728839  212748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key ...
	I0122 21:14:26.728856  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key: {Name:mkb55e3f07cb505298a7cbb607001b0bfa7eb986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.729056  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem (1338 bytes)
	W0122 21:14:26.729099  212748 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271_empty.pem, impossibly tiny 0 bytes
	I0122 21:14:26.729111  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:14:26.729133  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem (1078 bytes)
	I0122 21:14:26.729166  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:14:26.729187  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem (1675 bytes)
	I0122 21:14:26.729226  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:14:26.729797  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:14:26.755665  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0122 21:14:26.779724  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:14:26.806425  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0122 21:14:26.835884  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0122 21:14:26.866639  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:14:26.890368  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:14:26.912613  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:14:26.937566  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem --> /usr/share/ca-certificates/158271.pem (1338 bytes)
	I0122 21:14:26.960509  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /usr/share/ca-certificates/1582712.pem (1708 bytes)
	I0122 21:14:26.983691  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:14:27.007053  212748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:14:27.024722  212748 ssh_runner.go:195] Run: openssl version
	I0122 21:14:27.030397  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582712.pem && ln -fs /usr/share/ca-certificates/1582712.pem /etc/ssl/certs/1582712.pem"
	I0122 21:14:27.042485  212748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582712.pem
	I0122 21:14:27.046760  212748 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:06 /usr/share/ca-certificates/1582712.pem
	I0122 21:14:27.046822  212748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582712.pem
	I0122 21:14:27.052554  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1582712.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:14:27.064452  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:14:27.076200  212748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:14:27.080539  212748 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:14:27.080592  212748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:14:27.086105  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:14:27.096656  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/158271.pem && ln -fs /usr/share/ca-certificates/158271.pem /etc/ssl/certs/158271.pem"
	I0122 21:14:27.107085  212748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/158271.pem
	I0122 21:14:27.111204  212748 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:06 /usr/share/ca-certificates/158271.pem
	I0122 21:14:27.111264  212748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/158271.pem
	I0122 21:14:27.116650  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/158271.pem /etc/ssl/certs/51391683.0"
	I0122 21:14:27.130386  212748 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:14:27.134455  212748 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0122 21:14:27.134507  212748 kubeadm.go:392] StartCluster: {Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:14:27.134606  212748 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0122 21:14:27.134689  212748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:14:27.174454  212748 cri.go:89] found id: ""
	I0122 21:14:27.174525  212748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:14:27.187319  212748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:14:27.196689  212748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:14:27.207555  212748 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:14:27.207592  212748 kubeadm.go:157] found existing configuration files:
	
	I0122 21:14:27.207634  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:14:27.216519  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:14:27.216577  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:14:27.226617  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:14:27.236183  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:14:27.236259  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:14:27.245822  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:14:27.254665  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:14:27.254722  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:14:27.264848  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:14:27.273731  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:14:27.273810  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:14:27.283009  212748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:14:27.333040  212748 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 21:14:27.333164  212748 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:14:27.431695  212748 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:14:27.431822  212748 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:14:27.431956  212748 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 21:14:27.442198  212748 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:14:27.490172  212748 out.go:235]   - Generating certificates and keys ...
	I0122 21:14:27.490295  212748 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:14:27.490384  212748 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:14:27.570591  212748 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 21:14:27.685569  212748 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0122 21:14:27.785177  212748 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0122 21:14:27.976556  212748 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0122 21:14:28.097838  212748 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0122 21:14:28.098048  212748 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-988575 localhost] and IPs [192.168.72.236 127.0.0.1 ::1]
	I0122 21:14:28.185800  212748 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0122 21:14:28.186044  212748 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-988575 localhost] and IPs [192.168.72.236 127.0.0.1 ::1]
	I0122 21:14:28.286073  212748 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 21:14:28.486672  212748 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 21:14:28.568468  212748 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0122 21:14:28.568563  212748 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:14:28.976287  212748 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:14:29.146740  212748 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 21:14:29.595476  212748 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:14:29.847221  212748 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:14:30.156659  212748 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:14:30.157193  212748 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:14:30.159563  212748 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:14:30.161558  212748 out.go:235]   - Booting up control plane ...
	I0122 21:14:30.161681  212748 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:14:30.161787  212748 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:14:30.161901  212748 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:14:30.178285  212748 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:14:30.184859  212748 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:14:30.184917  212748 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:14:30.320444  212748 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 21:14:30.320643  212748 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 21:14:31.321913  212748 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001924991s
	I0122 21:14:31.322028  212748 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 21:14:35.821765  212748 kubeadm.go:310] [api-check] The API server is healthy after 4.501929141s
	I0122 21:14:35.833862  212748 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 21:14:35.848628  212748 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 21:14:35.870989  212748 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 21:14:35.871171  212748 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-988575 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 21:14:35.885261  212748 kubeadm.go:310] [bootstrap-token] Using token: df9fky.0iinyjuwhr05t9v8
	I0122 21:14:35.886772  212748 out.go:235]   - Configuring RBAC rules ...
	I0122 21:14:35.886911  212748 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 21:14:35.893522  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 21:14:35.901172  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 21:14:35.904477  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 21:14:35.907919  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 21:14:35.911173  212748 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 21:14:36.228855  212748 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 21:14:36.653094  212748 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0122 21:14:37.227413  212748 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0122 21:14:37.228201  212748 kubeadm.go:310] 
	I0122 21:14:37.228286  212748 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0122 21:14:37.228297  212748 kubeadm.go:310] 
	I0122 21:14:37.228370  212748 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0122 21:14:37.228378  212748 kubeadm.go:310] 
	I0122 21:14:37.228409  212748 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0122 21:14:37.228501  212748 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 21:14:37.228560  212748 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 21:14:37.228570  212748 kubeadm.go:310] 
	I0122 21:14:37.228651  212748 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0122 21:14:37.228661  212748 kubeadm.go:310] 
	I0122 21:14:37.228728  212748 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 21:14:37.228741  212748 kubeadm.go:310] 
	I0122 21:14:37.228795  212748 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0122 21:14:37.228860  212748 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 21:14:37.228932  212748 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 21:14:37.228941  212748 kubeadm.go:310] 
	I0122 21:14:37.229080  212748 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 21:14:37.229194  212748 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0122 21:14:37.229204  212748 kubeadm.go:310] 
	I0122 21:14:37.229320  212748 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token df9fky.0iinyjuwhr05t9v8 \
	I0122 21:14:37.229465  212748 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a \
	I0122 21:14:37.229497  212748 kubeadm.go:310] 	--control-plane 
	I0122 21:14:37.229506  212748 kubeadm.go:310] 
	I0122 21:14:37.229654  212748 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0122 21:14:37.229671  212748 kubeadm.go:310] 
	I0122 21:14:37.229786  212748 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token df9fky.0iinyjuwhr05t9v8 \
	I0122 21:14:37.229908  212748 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a 
	I0122 21:14:37.231087  212748 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:14:37.231117  212748 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:14:37.233453  212748 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:14:37.234647  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:14:37.246007  212748 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0122 21:14:37.265630  212748 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:14:37.265768  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:37.265791  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-988575 minikube.k8s.io/updated_at=2025_01_22T21_14_37_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4 minikube.k8s.io/name=enable-default-cni-988575 minikube.k8s.io/primary=true
	I0122 21:14:37.284648  212748 ops.go:34] apiserver oom_adj: -16
	I0122 21:14:37.375150  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:37.875733  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:38.375457  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:38.875854  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:39.375610  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:39.875900  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:40.375504  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:40.875942  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:41.376236  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:41.479382  212748 kubeadm.go:1113] duration metric: took 4.213688497s to wait for elevateKubeSystemPrivileges
	I0122 21:14:41.479425  212748 kubeadm.go:394] duration metric: took 14.344921437s to StartCluster
	I0122 21:14:41.479449  212748 settings.go:142] acquiring lock: {Name:mkfbfc304d1e9b2b80529e33af6a426e89d118a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:41.479527  212748 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:14:41.481154  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/kubeconfig: {Name:mk70478f45a79a3b41e7b46029f97939b1511ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:41.481438  212748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 21:14:41.481456  212748 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:14:41.481543  212748 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-988575"
	I0122 21:14:41.481561  212748 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-988575"
	I0122 21:14:41.481591  212748 host.go:66] Checking if "enable-default-cni-988575" exists ...
	I0122 21:14:41.481434  212748 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 21:14:41.481625  212748 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-988575"
	I0122 21:14:41.481647  212748 config.go:182] Loaded profile config "enable-default-cni-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:14:41.481661  212748 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-988575"
	I0122 21:14:41.482060  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.482082  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.482093  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.482114  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.483953  212748 out.go:177] * Verifying Kubernetes components...
	I0122 21:14:41.485418  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:41.498219  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I0122 21:14:41.498819  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.499472  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.499506  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.499869  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.500149  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0122 21:14:41.500155  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:41.500577  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.501134  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.501152  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.501532  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.502161  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.502189  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.503982  212748 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-988575"
	I0122 21:14:41.504030  212748 host.go:66] Checking if "enable-default-cni-988575" exists ...
	I0122 21:14:41.504412  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.504465  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.520906  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0122 21:14:41.521338  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.521861  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.521887  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.522373  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.522604  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:41.524388  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:41.526057  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0122 21:14:41.526074  212748 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:14:41.526518  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.527106  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.527131  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.527533  212748 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:14:41.527551  212748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:14:41.527565  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:41.527680  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.528088  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.528119  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.530246  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.530628  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:41.530645  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.530846  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:41.530989  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:41.531078  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:41.531905  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:41.551055  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0122 21:14:41.551680  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.552329  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.552355  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.552920  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.553124  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:41.554736  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:41.554997  212748 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:14:41.555014  212748 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:14:41.555033  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:41.558034  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.558472  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:41.558498  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.558719  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:41.558959  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:41.559140  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:41.559327  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:41.741072  212748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0122 21:14:41.741111  212748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:14:41.830050  212748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:14:41.850093  212748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:14:42.337101  212748 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
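The long pipeline at 21:14:41.741072 patches CoreDNS's ConfigMap in place: the first `sed` expression inserts a `hosts` block immediately before the `forward . /etc/resolv.conf` line, the second inserts a `log` directive before `errors`, and the result is fed back through `kubectl replace`. The fragment it injects into the Corefile ends up as:

    hosts {
       192.168.72.1 host.minikube.internal
       fallthrough
    }

which is what makes `host.minikube.internal` resolve to the host-side gateway (192.168.72.1 on this run's network) from inside the cluster.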
	I0122 21:14:42.338167  212748 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-988575" to be "Ready" ...
	I0122 21:14:42.355841  212748 node_ready.go:49] node "enable-default-cni-988575" has status "Ready":"True"
	I0122 21:14:42.355877  212748 node_ready.go:38] duration metric: took 17.683559ms for node "enable-default-cni-988575" to be "Ready" ...
	I0122 21:14:42.355890  212748 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:14:42.384983  212748 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace to be "Ready" ...
	I0122 21:14:42.805295  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805330  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805339  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.805350  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.805621  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.805621  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.805644  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.805653  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805660  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.805667  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.805695  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.805704  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.805720  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805728  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.806039  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.806042  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.806052  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.806074  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.806080  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.806088  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.820226  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.820246  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.820552  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.820571  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.822239  212748 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0122 21:14:42.823426  212748 addons.go:514] duration metric: took 1.34196753s for enable addons: enabled=[storage-provisioner default-storageclass]
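Addon enablement here is nothing more than two `kubectl apply` calls against manifests minikube copied into the node a moment earlier (storage-provisioner.yaml, 2676 bytes; storageclass.yaml, 271 bytes). A sketch of the equivalent manual step, with every path taken from this log:

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.32.1/kubectl apply \
    -f /etc/kubernetes/addons/storage-provisioner.yaml \
    -f /etc/kubernetes/addons/storageclass.yaml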
	I0122 21:14:42.846707  212748 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-988575" context rescaled to 1 replicas
	I0122 21:14:44.391239  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:46.890787  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:48.891457  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:50.892478  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:53.390442  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:55.390937  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:57.391475  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:59.890363  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:01.891101  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:03.891544  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:06.391260  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:08.891979  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:11.391874  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:13.890858  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:14.391385  212748 pod_ready.go:93] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.391414  212748 pod_ready.go:82] duration metric: took 32.006398889s for pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.391431  212748 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.393204  212748 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-t62dc" not found
	I0122 21:15:14.393231  212748 pod_ready.go:82] duration metric: took 1.79275ms for pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace to be "Ready" ...
	E0122 21:15:14.393241  212748 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-t62dc" not found
	I0122 21:15:14.393252  212748 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.397371  212748 pod_ready.go:93] pod "etcd-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.397397  212748 pod_ready.go:82] duration metric: took 4.137052ms for pod "etcd-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.397406  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.401206  212748 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.401224  212748 pod_ready.go:82] duration metric: took 3.811097ms for pod "kube-apiserver-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.401235  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.405039  212748 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.405056  212748 pod_ready.go:82] duration metric: took 3.815782ms for pod "kube-controller-manager-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.405064  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-pqfgf" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.588746  212748 pod_ready.go:93] pod "kube-proxy-pqfgf" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.588771  212748 pod_ready.go:82] duration metric: took 183.700915ms for pod "kube-proxy-pqfgf" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.588781  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.988925  212748 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.988961  212748 pod_ready.go:82] duration metric: took 400.171514ms for pod "kube-scheduler-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.988974  212748 pod_ready.go:39] duration metric: took 32.633070501s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:15:14.988998  212748 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:15:14.989065  212748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:15:15.003050  212748 api_server.go:72] duration metric: took 33.521423742s to wait for apiserver process to appear ...
	I0122 21:15:15.003081  212748 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:15:15.003104  212748 api_server.go:253] Checking apiserver healthz at https://192.168.72.236:8443/healthz ...
	I0122 21:15:15.007405  212748 api_server.go:279] https://192.168.72.236:8443/healthz returned 200:
	ok
	I0122 21:15:15.008265  212748 api_server.go:141] control plane version: v1.32.1
	I0122 21:15:15.008291  212748 api_server.go:131] duration metric: took 5.201626ms to wait for apiserver health ...
	I0122 21:15:15.008300  212748 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:15:15.190943  212748 system_pods.go:59] 7 kube-system pods found
	I0122 21:15:15.190980  212748 system_pods.go:61] "coredns-668d6bf9bc-8k2mr" [e3982f26-ae3b-4628-99a6-4d6cbcf75579] Running
	I0122 21:15:15.190986  212748 system_pods.go:61] "etcd-enable-default-cni-988575" [a3418942-728d-4bcd-a56a-b1b40b3c9480] Running
	I0122 21:15:15.190990  212748 system_pods.go:61] "kube-apiserver-enable-default-cni-988575" [50840094-887a-4220-8537-bc0aa3e0096f] Running
	I0122 21:15:15.190993  212748 system_pods.go:61] "kube-controller-manager-enable-default-cni-988575" [33fba83d-f193-4951-bec6-060ab5644e77] Running
	I0122 21:15:15.190996  212748 system_pods.go:61] "kube-proxy-pqfgf" [dbfd454c-8d4f-41fc-b630-9687e1cc00de] Running
	I0122 21:15:15.190999  212748 system_pods.go:61] "kube-scheduler-enable-default-cni-988575" [f8e36fef-016a-4800-b727-629672d1dd3a] Running
	I0122 21:15:15.191002  212748 system_pods.go:61] "storage-provisioner" [de70f162-242c-4c9f-83be-78eb9d99e78b] Running
	I0122 21:15:15.191008  212748 system_pods.go:74] duration metric: took 182.701656ms to wait for pod list to return data ...
	I0122 21:15:15.191021  212748 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:15:15.389632  212748 default_sa.go:45] found service account: "default"
	I0122 21:15:15.389660  212748 default_sa.go:55] duration metric: took 198.632639ms for default service account to be created ...
	I0122 21:15:15.389673  212748 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 21:15:15.591099  212748 system_pods.go:87] 7 kube-system pods found
	I0122 21:15:15.789898  212748 system_pods.go:105] "coredns-668d6bf9bc-8k2mr" [e3982f26-ae3b-4628-99a6-4d6cbcf75579] Running
	I0122 21:15:15.789933  212748 system_pods.go:105] "etcd-enable-default-cni-988575" [a3418942-728d-4bcd-a56a-b1b40b3c9480] Running
	I0122 21:15:15.789943  212748 system_pods.go:105] "kube-apiserver-enable-default-cni-988575" [50840094-887a-4220-8537-bc0aa3e0096f] Running
	I0122 21:15:15.789969  212748 system_pods.go:105] "kube-controller-manager-enable-default-cni-988575" [33fba83d-f193-4951-bec6-060ab5644e77] Running
	I0122 21:15:15.789986  212748 system_pods.go:105] "kube-proxy-pqfgf" [dbfd454c-8d4f-41fc-b630-9687e1cc00de] Running
	I0122 21:15:15.789995  212748 system_pods.go:105] "kube-scheduler-enable-default-cni-988575" [f8e36fef-016a-4800-b727-629672d1dd3a] Running
	I0122 21:15:15.790008  212748 system_pods.go:105] "storage-provisioner" [de70f162-242c-4c9f-83be-78eb9d99e78b] Running
	I0122 21:15:15.790024  212748 system_pods.go:147] duration metric: took 400.342486ms to wait for k8s-apps to be running ...
	I0122 21:15:15.790039  212748 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 21:15:15.790104  212748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:15:15.805060  212748 system_svc.go:56] duration metric: took 15.009919ms WaitForService to wait for kubelet
	I0122 21:15:15.805095  212748 kubeadm.go:582] duration metric: took 34.323472111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:15:15.805117  212748 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:15:15.989985  212748 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:15:15.990024  212748 node_conditions.go:123] node cpu capacity is 2
	I0122 21:15:15.990040  212748 node_conditions.go:105] duration metric: took 184.917088ms to run NodePressure ...
	I0122 21:15:15.990057  212748 start.go:241] waiting for startup goroutines ...
	I0122 21:15:15.990067  212748 start.go:246] waiting for cluster config update ...
	I0122 21:15:15.990082  212748 start.go:255] writing updated cluster config ...
	I0122 21:15:15.990362  212748 ssh_runner.go:195] Run: rm -f paused
	I0122 21:15:16.038542  212748 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 21:15:16.040655  212748 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-988575" cluster and "default" namespace by default
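Everything between StartCluster and "Done!" is a fixed ladder of readiness checks: node Ready (17.7ms here), all system-critical pods Ready (32.6s, dominated by coredns), the kube-apiserver process, the /healthz endpoint, the pod list, the default service account, running k8s-apps, an active kubelet, and finally NodePressure. A sketch of spot-checking the same things by hand against this cluster (endpoint taken from the log; -k only because this is a throwaway test VM):

  curl -k https://192.168.72.236:8443/healthz     # expect: ok
  kubectl get nodes
  kubectl -n kube-system get pods
  sudo systemctl is-active kubelet                # on the node; expect: active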
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c365fd8565af       6e38f40d628db       28 minutes ago      Running             storage-provisioner       1                   8d6ae3523a5b0       storage-provisioner
	c939047369d72       6e38f40d628db       29 minutes ago      Exited              storage-provisioner       0                   8d6ae3523a5b0       storage-provisioner
	6bbbaabb7433b       c69fa2e9cbf5f       29 minutes ago      Running             coredns                   0                   de6f410774846       coredns-668d6bf9bc-7g77x
	2cdfeb8366599       c69fa2e9cbf5f       29 minutes ago      Running             coredns                   0                   1fb22b66e7b90       coredns-668d6bf9bc-j6pzl
	dbb9db6df1827       e29f9c7391fd9       29 minutes ago      Running             kube-proxy                0                   438860aaaab86       kube-proxy-c68rw
	661ec50972f7e       95c0bda56fc4d       29 minutes ago      Running             kube-apiserver            0                   caaceaa1b76c6       kube-apiserver-default-k8s-diff-port-061998
	632fbcff689c2       2b0d6572d062c       29 minutes ago      Running             kube-scheduler            0                   4983f1c134652       kube-scheduler-default-k8s-diff-port-061998
	9c1c5c46d9d8e       019ee182b58e2       29 minutes ago      Running             kube-controller-manager   0                   494d3fb7a22d7       kube-controller-manager-default-k8s-diff-port-061998
	d771e44e2df9f       a9e7e6b294baf       29 minutes ago      Running             etcd                      0                   eba646ac804c9       etcd-default-k8s-diff-port-061998
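The diagnostics from here on were captured from the default-k8s-diff-port-061998 profile (the failing default-k8s-diff-port test), not from the enable-default-cni cluster whose start log ends above. In the table, note the ATTEMPT column: storage-provisioner's attempt 0 exited and attempt 1 replaced it about a minute later, while every other container is still on its first attempt. The same view comes from inside the node, e.g. (sketch):

  minikube ssh -p default-k8s-diff-port-061998 -- sudo crictl ps -a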
	
	
	==> containerd <==
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.032487051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7g77x,Uid:e2ba4dbd-2805-4c6f-847b-65fd77ed65bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"de6f4107748460fa73dc04f339ee27cc16d63f5f7402f25f0bdb2cc0dbf48791\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.036385106Z" level=info msg="CreateContainer within sandbox \"de6f4107748460fa73dc04f339ee27cc16d63f5f7402f25f0bdb2cc0dbf48791\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.038158338Z" level=info msg="CreateContainer within sandbox \"1fb22b66e7b90625c18a014b494d1d6f0f44397308657d6f88d953329725d2c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cdfeb836659963ce3a00db9188f96deea3f1041c154036cc88c5e19625c672f\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.041610899Z" level=info msg="StartContainer for \"2cdfeb836659963ce3a00db9188f96deea3f1041c154036cc88c5e19625c672f\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.053209831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.053495323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.053720008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.054249813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.068657227Z" level=info msg="CreateContainer within sandbox \"de6f4107748460fa73dc04f339ee27cc16d63f5f7402f25f0bdb2cc0dbf48791\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bbbaabb7433b31190c7ea06afb638286be454cd4afd3cc51a9bd193d4e3b983\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.069866698Z" level=info msg="StartContainer for \"6bbbaabb7433b31190c7ea06afb638286be454cd4afd3cc51a9bd193d4e3b983\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.138446557Z" level=info msg="StartContainer for \"2cdfeb836659963ce3a00db9188f96deea3f1041c154036cc88c5e19625c672f\" returns successfully"
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.157124225Z" level=info msg="StartContainer for \"6bbbaabb7433b31190c7ea06afb638286be454cd4afd3cc51a9bd193d4e3b983\" returns successfully"
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.311445651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:d3fa8ed4-5942-4f60-8d8f-7e91a791f3bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d6ae3523a5b0cb8a2f26e879af05db08b6e6778f0b31bc2761e26a296a90c81\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.315702807Z" level=info msg="CreateContainer within sandbox \"8d6ae3523a5b0cb8a2f26e879af05db08b6e6778f0b31bc2761e26a296a90c81\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.333994687Z" level=info msg="CreateContainer within sandbox \"8d6ae3523a5b0cb8a2f26e879af05db08b6e6778f0b31bc2761e26a296a90c81\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c939047369d72631e0480aeb4b2c215a114c105473344e8df5f643ad4177a59b\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.335171562Z" level=info msg="StartContainer for \"c939047369d72631e0480aeb4b2c215a114c105473344e8df5f643ad4177a59b\""
	Jan 22 21:03:31 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:31.422423845Z" level=info msg="StartContainer for \"c939047369d72631e0480aeb4b2c215a114c105473344e8df5f643ad4177a59b\" returns successfully"
	Jan 22 21:03:35 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:03:35.586841560Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Jan 22 21:04:01 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:04:01.465218213Z" level=info msg="shim disconnected" id=c939047369d72631e0480aeb4b2c215a114c105473344e8df5f643ad4177a59b namespace=k8s.io
	Jan 22 21:04:01 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:04:01.465575529Z" level=warning msg="cleaning up after shim disconnected" id=c939047369d72631e0480aeb4b2c215a114c105473344e8df5f643ad4177a59b namespace=k8s.io
	Jan 22 21:04:01 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:04:01.465684799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 22 21:04:02 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:04:02.278619091Z" level=info msg="CreateContainer within sandbox \"8d6ae3523a5b0cb8a2f26e879af05db08b6e6778f0b31bc2761e26a296a90c81\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Jan 22 21:04:02 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:04:02.306347044Z" level=info msg="CreateContainer within sandbox \"8d6ae3523a5b0cb8a2f26e879af05db08b6e6778f0b31bc2761e26a296a90c81\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"8c365fd8565afdbcb12a5e51108ac45e2b1b907ae6058a99f46aef7efbb329ec\""
	Jan 22 21:04:02 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:04:02.307464688Z" level=info msg="StartContainer for \"8c365fd8565afdbcb12a5e51108ac45e2b1b907ae6058a99f46aef7efbb329ec\""
	Jan 22 21:04:02 default-k8s-diff-port-061998 containerd[646]: time="2025-01-22T21:04:02.381519752Z" level=info msg="StartContainer for \"8c365fd8565afdbcb12a5e51108ac45e2b1b907ae6058a99f46aef7efbb329ec\" returns successfully"
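In CRI terms the containerd excerpt records that restart: attempt 0 of storage-provisioner (c939...) has its shim disconnect at 21:04:01, and attempt 1 (8c36...) is created in the same sandbox (8d6a...) a second later, matching the Exited/Running pair in the status table. Something like the following, run inside the node, would pull the exited attempt's own output (a sketch, assuming crictl resolves unambiguous ID prefixes):

  sudo crictl logs c939047369d72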
	
	
	==> coredns [2cdfeb836659963ce3a00db9188f96deea3f1041c154036cc88c5e19625c672f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57715 - 57336 "HINFO IN 3316770624381690972.6046129243354590908. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048029303s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[340449915]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (22-Jan-2025 21:03:31.197) (total time: 30001ms):
	Trace[340449915]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:04:01.198)
	Trace[340449915]: [30.001348048s] [30.001348048s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[341788905]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (22-Jan-2025 21:03:31.197) (total time: 30001ms):
	Trace[341788905]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:04:01.199)
	Trace[341788905]: [30.001378644s] [30.001378644s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[360369023]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (22-Jan-2025 21:03:31.198) (total time: 30001ms):
	Trace[360369023]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:04:01.199)
	Trace[360369023]: [30.001823173s] [30.001823173s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [6bbbaabb7433b31190c7ea06afb638286be454cd4afd3cc51a9bd193d4e3b983] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50026 - 59840 "HINFO IN 1366446576073275258.5182828258908412344. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.079753116s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[210176880]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (22-Jan-2025 21:03:31.208) (total time: 30001ms):
	Trace[210176880]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (21:04:01.209)
	Trace[210176880]: [30.001205734s] [30.001205734s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2056087937]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (22-Jan-2025 21:03:31.209) (total time: 30001ms):
	Trace[2056087937]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (21:04:01.210)
	Trace[2056087937]: [30.001021863s] [30.001021863s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[73313375]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (22-Jan-2025 21:03:31.209) (total time: 30001ms):
	Trace[73313375]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (21:04:01.210)
	Trace[73313375]: [30.001726208s] [30.001726208s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
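Both CoreDNS replicas show the same startup pattern: for the first ~30 seconds every list against the in-cluster apiserver VIP (10.96.0.1:443) times out, then the errors stop and nothing else is logged for the remaining 29 minutes. That window is the usual gap on a fresh node before kube-proxy programs the service rules. A sketch of confirming afterwards that the VIP is backed by the real apiserver:

  kubectl get endpoints kubernetes    # should list the apiserver's address (192.168.50.147 here)
  kubectl -n kube-system get pods -l k8s-app=kube-dns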
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-061998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-061998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4
	                    minikube.k8s.io/name=default-k8s-diff-port-061998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_22T21_03_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 Jan 2025 21:03:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-061998
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 Jan 2025 21:32:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 Jan 2025 21:30:46 +0000   Wed, 22 Jan 2025 21:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 Jan 2025 21:30:46 +0000   Wed, 22 Jan 2025 21:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 Jan 2025 21:30:46 +0000   Wed, 22 Jan 2025 21:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 Jan 2025 21:30:46 +0000   Wed, 22 Jan 2025 21:03:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.147
	  Hostname:    default-k8s-diff-port-061998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d70cdda912104bc4bf44abf57d96914c
	  System UUID:                d70cdda9-1210-4bc4-bf44-abf57d96914c
	  Boot ID:                    cf06b85d-520c-4921-8297-253781f1d7e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-7g77x                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-668d6bf9bc-j6pzl                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-061998                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-061998             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-061998    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-c68rw                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-061998             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-061998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-061998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-061998 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-061998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-061998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-061998 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-061998 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-061998 event: Registered Node default-k8s-diff-port-061998 in Controller
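This node object is what most of minikube's wait steps read from: the Ready condition (True since 21:03:25), the 2-CPU / ~2GiB capacity, and the Allocated resources table, which shows the control-plane pods alone requesting 850m CPU (42%) of the small VM. Reproducing the view (sketch):

  kubectl describe node default-k8s-diff-port-061998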
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048999] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038214] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.913008] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.884478] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Jan22 21:03] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.462213] systemd-fstab-generator[510]: Ignoring "noauto" option for root device
	[  +0.059729] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060117] systemd-fstab-generator[522]: Ignoring "noauto" option for root device
	[  +0.163410] systemd-fstab-generator[536]: Ignoring "noauto" option for root device
	[  +0.132056] systemd-fstab-generator[548]: Ignoring "noauto" option for root device
	[  +0.281231] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +4.556001] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.055524] kauditd_printk_skb: 158 callbacks suppressed
	[  +0.604793] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +4.464439] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.062330] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.510572] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.067090] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.809554] systemd-fstab-generator[1330]: Ignoring "noauto" option for root device
	[  +0.811984] kauditd_printk_skb: 46 callbacks suppressed
	[Jan22 21:04] kauditd_printk_skb: 75 callbacks suppressed
	
	
	==> etcd [d771e44e2df9f64edf6061d9b38ad0f86e76c30dab5b1cc155c285cec75b33e3] <==
	{"level":"warn","ts":"2025-01-22T21:12:42.589742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-22T21:12:42.274347Z","time spent":"315.387993ms","remote":"127.0.0.1:49498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-22T21:13:17.750835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.388202ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:13:17.750911Z","caller":"traceutil/trace.go:171","msg":"trace[862839965] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:880; }","duration":"136.473952ms","start":"2025-01-22T21:13:17.614425Z","end":"2025-01-22T21:13:17.750899Z","steps":["trace[862839965] 'range keys from in-memory index tree'  (duration: 136.379413ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:13:18.724043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.735047ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:13:18.724174Z","caller":"traceutil/trace.go:171","msg":"trace[1583305935] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:881; }","duration":"109.878114ms","start":"2025-01-22T21:13:18.614276Z","end":"2025-01-22T21:13:18.724154Z","steps":["trace[1583305935] 'range keys from in-memory index tree'  (duration: 109.718076ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:13:18.724384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.449943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:13:18.724617Z","caller":"traceutil/trace.go:171","msg":"trace[1302507506] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:881; }","duration":"252.715161ms","start":"2025-01-22T21:13:18.471888Z","end":"2025-01-22T21:13:18.724604Z","steps":["trace[1302507506] 'range keys from in-memory index tree'  (duration: 252.136426ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:13:21.529121Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":643}
	{"level":"info","ts":"2025-01-22T21:13:21.537527Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":643,"took":"7.896446ms","hash":2243154605,"current-db-size-bytes":1945600,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1945600,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-01-22T21:13:21.537579Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2243154605,"revision":643,"compact-revision":-1}
	{"level":"warn","ts":"2025-01-22T21:14:26.827699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.16041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:14:26.828135Z","caller":"traceutil/trace.go:171","msg":"trace[439120649] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:935; }","duration":"104.653873ms","start":"2025-01-22T21:14:26.723460Z","end":"2025-01-22T21:14:26.828114Z","steps":["trace[439120649] 'range keys from in-memory index tree'  (duration: 104.040395ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:14:27.080523Z","caller":"traceutil/trace.go:171","msg":"trace[2028589692] linearizableReadLoop","detail":"{readStateIndex:1082; appliedIndex:1081; }","duration":"212.690626ms","start":"2025-01-22T21:14:26.867814Z","end":"2025-01-22T21:14:27.080505Z","steps":["trace[2028589692] 'read index received'  (duration: 212.500664ms)","trace[2028589692] 'applied index is now lower than readState.Index'  (duration: 189.285µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-22T21:14:27.081286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.445994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:14:27.081640Z","caller":"traceutil/trace.go:171","msg":"trace[1106653322] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:936; }","duration":"213.811164ms","start":"2025-01-22T21:14:26.867778Z","end":"2025-01-22T21:14:27.081589Z","steps":["trace[1106653322] 'agreement among raft nodes before linearized reading'  (duration: 213.416567ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:14:27.081683Z","caller":"traceutil/trace.go:171","msg":"trace[1876760479] transaction","detail":"{read_only:false; response_revision:936; number_of_response:1; }","duration":"247.523897ms","start":"2025-01-22T21:14:26.834141Z","end":"2025-01-22T21:14:27.081665Z","steps":["trace[1876760479] 'process raft request'  (duration: 246.230498ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:18:21.536506Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":883}
	{"level":"info","ts":"2025-01-22T21:18:21.539963Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":883,"took":"2.790141ms","hash":2082911694,"current-db-size-bytes":1945600,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1486848,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-01-22T21:18:21.540010Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2082911694,"revision":883,"compact-revision":643}
	{"level":"info","ts":"2025-01-22T21:23:21.550219Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1123}
	{"level":"info","ts":"2025-01-22T21:23:21.553630Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1123,"took":"3.094873ms","hash":615019102,"current-db-size-bytes":1945600,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1441792,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-01-22T21:23:21.553730Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":615019102,"revision":1123,"compact-revision":883}
	{"level":"info","ts":"2025-01-22T21:28:21.556379Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1363}
	{"level":"info","ts":"2025-01-22T21:28:21.559526Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1363,"took":"2.546261ms","hash":831427803,"current-db-size-bytes":1945600,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1400832,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-01-22T21:28:21.559734Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":831427803,"revision":1363,"compact-revision":1123}
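The recurring "apply request took too long" warnings (100-320ms for trivial range reads against a database under 2 MB) point to I/O and CPU contention in the 2-vCPU VM rather than a data problem; the five-minute compactions all complete in single-digit milliseconds. If etcdctl is available in the node, health can be checked directly (a sketch; the cert paths are minikube's usual layout and an assumption here):

  sudo ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint status -w table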
	
	
	==> kernel <==
	 21:32:47 up 29 min,  0 users,  load average: 0.45, 0.24, 0.19
	Linux default-k8s-diff-port-061998 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [661ec50972f7e36b8d00213bb9c5e2c549541fdddcb278b499593c86ae6201ea] <==
	I0122 21:03:22.707226       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0122 21:03:22.707402       1 policy_source.go:240] refreshing policies
	E0122 21:03:22.708297       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0122 21:03:22.724774       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0122 21:03:22.741211       1 controller.go:615] quota admission added evaluator for: namespaces
	I0122 21:03:22.741632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0122 21:03:22.741679       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0122 21:03:22.741946       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0122 21:03:22.742046       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0122 21:03:22.912889       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0122 21:03:23.545122       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0122 21:03:23.550505       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0122 21:03:23.550541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0122 21:03:24.103794       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0122 21:03:24.143671       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0122 21:03:24.258749       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0122 21:03:24.266231       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.147]
	I0122 21:03:24.267315       1 controller.go:615] quota admission added evaluator for: endpoints
	I0122 21:03:24.272413       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0122 21:03:24.623780       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0122 21:03:25.233515       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0122 21:03:25.245644       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0122 21:03:25.253496       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0122 21:03:29.960640       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0122 21:03:30.175686       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9c1c5c46d9d8ebc8fdcb6cf2307b7f78b72ab2fc24bf1fea246e4edf991227d0] <==
	I0122 21:03:29.206959       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:03:29.206977       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:03:29.212243       1 shared_informer.go:320] Caches are synced for persistent volume
	I0122 21:03:29.223252       1 shared_informer.go:320] Caches are synced for garbage collector
	I0122 21:03:29.223280       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0122 21:03:29.223286       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0122 21:03:29.223582       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0122 21:03:29.223713       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0122 21:03:29.224111       1 shared_informer.go:320] Caches are synced for daemon sets
	I0122 21:03:29.337598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:03:30.326331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="357.564015ms"
	I0122 21:03:30.372630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.260938ms"
	I0122 21:03:30.372741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="81.023µs"
	I0122 21:03:31.227554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.877µs"
	I0122 21:03:31.275320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.262µs"
	I0122 21:03:35.601037       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:04:03.892279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.514648ms"
	I0122 21:04:03.892529       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="187.326µs"
	I0122 21:04:04.800023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.820506ms"
	I0122 21:04:04.800391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="228.775µs"
	I0122 21:10:23.696753       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:15:28.765773       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:20:33.512766       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:25:40.157477       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
	I0122 21:30:46.400824       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-061998"
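
After the startup burst, the node-ipam-controller re-syncs default-k8s-diff-port-061998 roughly every five minutes (21:10:23, 21:15:28, 21:20:33, 21:25:40, 21:30:46). The ~5m05s spacing would match the kubelet's default nodeStatusReportFrequency of 5m retriggering the sync via Node object updates; that is an inference from the timing only. A small sketch that computes the gaps:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// timestamps copied from the "Successfully synced" lines above
		stamps := []string{"21:10:23", "21:15:28", "21:20:33", "21:25:40", "21:30:46"}
		prev, _ := time.Parse("15:04:05", stamps[0])
		for _, s := range stamps[1:] {
			t, _ := time.Parse("15:04:05", s)
			fmt.Printf("%s -> %s: %v\n", prev.Format("15:04:05"), s, t.Sub(prev))
			prev = t
		}
	}
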
	
	
	==> kube-proxy [dbb9db6df1827e58db2e81dc13015a870461f7241cfc617e2e7da0cf7f6322e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0122 21:03:31.251532       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0122 21:03:31.311523       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.147"]
	E0122 21:03:31.311764       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0122 21:03:31.370151       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0122 21:03:31.370218       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 21:03:31.370245       1 server_linux.go:170] "Using iptables Proxier"
	I0122 21:03:31.373816       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0122 21:03:31.374850       1 server.go:497] "Version info" version="v1.32.1"
	I0122 21:03:31.374950       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 21:03:31.378838       1 config.go:199] "Starting service config controller"
	I0122 21:03:31.379415       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0122 21:03:31.379760       1 config.go:105] "Starting endpoint slice config controller"
	I0122 21:03:31.379844       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0122 21:03:31.382309       1 config.go:329] "Starting node config controller"
	I0122 21:03:31.382429       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0122 21:03:31.480345       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0122 21:03:31.480352       1 shared_informer.go:320] Caches are synced for service config
	I0122 21:03:31.482921       1 shared_informer.go:320] Caches are synced for node config
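
The truncated error at the top of this section is kube-proxy's best-effort cleanup of nftables rules failing with "Operation not supported": the Buildroot guest kernel has no nftables support, and with the IPv6 iptables family also missing, kube-proxy settles on the single-stack IPv4 iptables proxier. A read-only probe in the same spirit (illustrative only, not kube-proxy's actual detection code; assumes an nft binary on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeNftables checks whether nftables is usable at all. The operation
	// failing in the log was "add table ip kube-proxy"; listing tables is a
	// read-only stand-in with no side effects.
	func probeNftables() error {
		if _, err := exec.LookPath("nft"); err != nil {
			return fmt.Errorf("nft not installed: %w", err)
		}
		if out, err := exec.Command("nft", "list", "tables").CombinedOutput(); err != nil {
			return fmt.Errorf("nftables unusable: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := probeNftables(); err != nil {
			fmt.Println("falling back to iptables:", err)
			return
		}
		fmt.Println("nftables available")
	}
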
	
	
	==> kube-scheduler [632fbcff689c294cffe080f0ad0eff32b4b67cf9ffccb3074cb4e8206868e057] <==
	W0122 21:03:22.675152       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0122 21:03:22.675486       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:22.675740       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0122 21:03:22.675828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:22.675955       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0122 21:03:22.676089       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.558926       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0122 21:03:23.559151       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.605345       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0122 21:03:23.605540       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.651988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0122 21:03:23.652206       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.705484       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 21:03:23.705681       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.725371       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0122 21:03:23.725419       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.755003       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 21:03:23.755077       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.782996       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0122 21:03:23.783128       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.881538       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0122 21:03:23.881585       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:03:23.888974       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0122 21:03:23.889283       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0122 21:03:24.263304       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
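
All of the "forbidden" reflector errors above land within roughly 1.5 seconds of startup (21:03:22.675 to 21:03:23.889) and stop once the caches sync at 21:03:24: the scheduler's informers begin listing before the apiserver has finished bootstrapping RBAC, and they simply retry until authorized. A stdlib-only sketch of that retry-until-authorized shape (timings and error text are stand-ins):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Second)
		attempt := 0
		list := func() error { // stand-in for an informer's initial List call
			attempt++
			if attempt < 4 {
				return errors.New("is forbidden: RBAC not bootstrapped yet")
			}
			return nil
		}
		for {
			err := list()
			if err == nil {
				fmt.Printf("caches synced after %d attempts\n", attempt)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("giving up:", err)
				return
			}
			time.Sleep(200 * time.Millisecond) // real reflectors back off with jitter
		}
	}
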
	
	
	==> kubelet <==
	Jan 22 21:28:25 default-k8s-diff-port-061998 kubelet[1232]: E0122 21:28:25.207775    1232 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:28:25 default-k8s-diff-port-061998 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:28:25 default-k8s-diff-port-061998 kubelet[1232]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:28:25 default-k8s-diff-port-061998 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:28:25 default-k8s-diff-port-061998 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 21:29:25 default-k8s-diff-port-061998 kubelet[1232]: E0122 21:29:25.209266    1232 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:29:25 default-k8s-diff-port-061998 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:29:25 default-k8s-diff-port-061998 kubelet[1232]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:29:25 default-k8s-diff-port-061998 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:29:25 default-k8s-diff-port-061998 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 21:30:25 default-k8s-diff-port-061998 kubelet[1232]: E0122 21:30:25.208561    1232 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:30:25 default-k8s-diff-port-061998 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:30:25 default-k8s-diff-port-061998 kubelet[1232]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:30:25 default-k8s-diff-port-061998 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:30:25 default-k8s-diff-port-061998 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 21:31:25 default-k8s-diff-port-061998 kubelet[1232]: E0122 21:31:25.209304    1232 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:31:25 default-k8s-diff-port-061998 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:31:25 default-k8s-diff-port-061998 kubelet[1232]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:31:25 default-k8s-diff-port-061998 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:31:25 default-k8s-diff-port-061998 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 21:32:25 default-k8s-diff-port-061998 kubelet[1232]: E0122 21:32:25.210578    1232 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:32:25 default-k8s-diff-port-061998 kubelet[1232]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:32:25 default-k8s-diff-port-061998 kubelet[1232]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:32:25 default-k8s-diff-port-061998 kubelet[1232]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:32:25 default-k8s-diff-port-061998 kubelet[1232]:  > table="nat" chain="KUBE-KUBELET-CANARY"
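
The only kubelet noise in this window is the same once-a-minute canary failure: the kubelet probes iptables health by creating a KUBE-KUBELET-CANARY chain, and the IPv6 half fails because the guest kernel lacks the ip6table_nat module, so ip6tables cannot initialize its nat table. On this IPv4-only cluster that is benign; kube-proxy already reported "No iptables support for family IPv6" above. A quick node-side check that reproduces the condition (a sketch; assumes ip6tables on PATH and root privileges):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same table the kubelet canary needs; -L only lists, it changes nothing.
		out, err := exec.Command("ip6tables", "-t", "nat", "-L").CombinedOutput()
		if err != nil {
			// Expected on this guest: "can't initialize ip6tables table `nat'",
			// i.e. the kernel is missing the ip6table_nat module.
			fmt.Printf("ip6 nat table unavailable: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6 nat table present")
	}
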
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-061998 -n default-k8s-diff-port-061998
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-061998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestStartStop/group/default-k8s-diff-port/serial/FirstStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (1801.65s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (1598.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-086882 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-086882 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m36.165992284s)
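
Note the failure mode: unlike the FirstStart failure above, this start never exited on its own. "signal: killed" after 26m36s means the test harness killed the minikube child when its deadline expired (the test is recorded at 1598.13s). That is the usual result of running a child under a context deadline, as in this sketch (the 2-second timeout is illustrative):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		// CommandContext sends SIGKILL once the deadline passes, and the
		// resulting error prints exactly as "signal: killed".
		err := exec.CommandContext(ctx, "sleep", "60").Run()
		fmt.Println(err)
	}
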

                                                
                                                
-- stdout --
	* [no-preload-086882] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-086882" primary control-plane node in "no-preload-086882" cluster
	* Restarting existing kvm2 VM for "no-preload-086882" ...
	* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-086882 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0122 21:03:35.183216  199863 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:03:35.183320  199863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:03:35.183329  199863 out.go:358] Setting ErrFile to fd 2...
	I0122 21:03:35.183333  199863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:03:35.183541  199863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 21:03:35.184068  199863 out.go:352] Setting JSON to false
	I0122 21:03:35.185008  199863 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9950,"bootTime":1737569865,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:03:35.185106  199863 start.go:139] virtualization: kvm guest
	I0122 21:03:35.187317  199863 out.go:177] * [no-preload-086882] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:03:35.188686  199863 notify.go:220] Checking for updates...
	I0122 21:03:35.188704  199863 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:03:35.190138  199863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:03:35.191462  199863 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:03:35.192608  199863 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:03:35.193852  199863 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:03:35.195233  199863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:03:35.196991  199863 config.go:182] Loaded profile config "no-preload-086882": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:03:35.197593  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:35.197653  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:35.214019  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0122 21:03:35.214560  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:35.215236  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:03:35.215292  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:35.215753  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:35.215957  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:35.216252  199863 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:03:35.216700  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:35.216781  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:35.232594  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0122 21:03:35.233234  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:35.233738  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:03:35.233762  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:35.234146  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:35.234320  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:35.270451  199863 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 21:03:35.271886  199863 start.go:297] selected driver: kvm2
	I0122 21:03:35.271900  199863 start.go:901] validating driver "kvm2" against &{Name:no-preload-086882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-086882 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:03:35.272020  199863 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:03:35.272899  199863 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.273019  199863 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-150966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:03:35.288678  199863 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:03:35.289267  199863 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:03:35.289310  199863 cni.go:84] Creating CNI manager for ""
	I0122 21:03:35.289371  199863 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 21:03:35.289449  199863 start.go:340] cluster config:
	{Name:no-preload-086882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-086882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-hos
t Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:03:35.289567  199863 iso.go:125] acquiring lock: {Name:mkc3bf0604e328871936621dd0e0cda10261a449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.292076  199863 out.go:177] * Starting "no-preload-086882" primary control-plane node in "no-preload-086882" cluster
	I0122 21:03:35.293414  199863 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:03:35.293623  199863 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/config.json ...
	I0122 21:03:35.293700  199863 cache.go:107] acquiring lock: {Name:mka2650751f71d993171f4ad9b37c37cdeb31da1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.293697  199863 cache.go:107] acquiring lock: {Name:mkb341c1406e7105c3aa723a88cb23e7849aae99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.293807  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0122 21:03:35.293822  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0122 21:03:35.293829  199863 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.441µs
	I0122 21:03:35.293836  199863 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 149.664µs
	I0122 21:03:35.293852  199863 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0122 21:03:35.293855  199863 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0122 21:03:35.293871  199863 cache.go:107] acquiring lock: {Name:mka6f1c5c3bfadc03b2e5c14bb5c7002855b324b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.293879  199863 cache.go:107] acquiring lock: {Name:mk4e299b571e67473e8ea94279db59b36c4e77a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.293930  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0122 21:03:35.293933  199863 start.go:360] acquireMachinesLock for no-preload-086882: {Name:mkde076c0ff5ffaed1ac7d9ac4f697ecfb6e2cf2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:03:35.293944  199863 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 76.846µs
	I0122 21:03:35.293946  199863 cache.go:107] acquiring lock: {Name:mk08fbb3bc73abe1ed83a8a81f327988179b2d97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.293977  199863 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0122 21:03:35.293930  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0122 21:03:35.293993  199863 start.go:364] duration metric: took 41.378µs to acquireMachinesLock for "no-preload-086882"
	I0122 21:03:35.293952  199863 cache.go:107] acquiring lock: {Name:mk31e8e437ffeccaa7faf292310a2c7fba6ab049 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.294015  199863 start.go:96] Skipping create...Using existing machine configuration
	I0122 21:03:35.294023  199863 fix.go:54] fixHost starting: 
	I0122 21:03:35.294015  199863 cache.go:107] acquiring lock: {Name:mkb95818138bef9b02fdb78b0b6ea3ce6d53bc9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.294033  199863 cache.go:107] acquiring lock: {Name:mk489fe20dbf10ab4a13d1bc6eefa97efa1d9c93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:03:35.294081  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0122 21:03:35.294085  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0122 21:03:35.294094  199863 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 62.804µs
	I0122 21:03:35.294103  199863 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0122 21:03:35.293993  199863 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 116.746µs
	I0122 21:03:35.294117  199863 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0122 21:03:35.294002  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0122 21:03:35.294129  199863 cache.go:115] /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0122 21:03:35.294142  199863 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 246.14µs
	I0122 21:03:35.294151  199863 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0122 21:03:35.294130  199863 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 186.863µs
	I0122 21:03:35.294169  199863 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0122 21:03:35.294102  199863 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 94.559µs
	I0122 21:03:35.294219  199863 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0122 21:03:35.294259  199863 cache.go:87] Successfully saved all images to host disk.
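
Because this profile runs with --preload=false, minikube consults a per-image tarball cache instead of a preload bundle: each image takes its own lock, finds the cached tar already on disk ("exists", each check well under a millisecond), and skips the download. A minimal sketch of that lock-then-stat pattern (paths and behavior are illustrative, not minikube's real cache layout):

	package main

	import (
		"fmt"
		"os"
		"sync"
	)

	var locks sync.Map // image ref -> *sync.Mutex

	// ensureCached skips work when the tarball already exists, mirroring the
	// "cache image ... exists ... succeeded" lines above.
	func ensureCached(image, tarPath string) error {
		m, _ := locks.LoadOrStore(image, &sync.Mutex{})
		mu := m.(*sync.Mutex)
		mu.Lock()
		defer mu.Unlock()
		if _, err := os.Stat(tarPath); err == nil {
			return nil // cache hit
		}
		return fmt.Errorf("%s not cached; the download-and-save step would run here", image)
	}

	func main() {
		fmt.Println(ensureCached("registry.k8s.io/pause:3.10", "/tmp/pause_3.10"))
	}
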
	I0122 21:03:35.294538  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:03:35.294596  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:03:35.309194  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0122 21:03:35.309556  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:03:35.310026  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:03:35.310047  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:03:35.310382  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:03:35.310581  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:35.310730  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetState
	I0122 21:03:35.312234  199863 fix.go:112] recreateIfNeeded on no-preload-086882: state=Stopped err=<nil>
	I0122 21:03:35.312272  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	W0122 21:03:35.312423  199863 fix.go:138] unexpected machine state, will restart: <nil>
	I0122 21:03:35.314353  199863 out.go:177] * Restarting existing kvm2 VM for "no-preload-086882" ...
	I0122 21:03:35.315627  199863 main.go:141] libmachine: (no-preload-086882) Calling .Start
	I0122 21:03:35.315802  199863 main.go:141] libmachine: (no-preload-086882) starting domain...
	I0122 21:03:35.315824  199863 main.go:141] libmachine: (no-preload-086882) ensuring networks are active...
	I0122 21:03:35.316547  199863 main.go:141] libmachine: (no-preload-086882) Ensuring network default is active
	I0122 21:03:35.316871  199863 main.go:141] libmachine: (no-preload-086882) Ensuring network mk-no-preload-086882 is active
	I0122 21:03:35.317250  199863 main.go:141] libmachine: (no-preload-086882) getting domain XML...
	I0122 21:03:35.318040  199863 main.go:141] libmachine: (no-preload-086882) creating domain...
	I0122 21:03:36.520809  199863 main.go:141] libmachine: (no-preload-086882) waiting for IP...
	I0122 21:03:36.521793  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:36.522240  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:36.522651  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:36.522238  199899 retry.go:31] will retry after 217.431322ms: waiting for domain to come up
	I0122 21:03:36.741883  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:36.742457  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:36.742517  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:36.742439  199899 retry.go:31] will retry after 292.070615ms: waiting for domain to come up
	I0122 21:03:37.035957  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:37.036552  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:37.036583  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:37.036496  199899 retry.go:31] will retry after 354.437031ms: waiting for domain to come up
	I0122 21:03:37.392120  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:37.392673  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:37.392700  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:37.392637  199899 retry.go:31] will retry after 461.834284ms: waiting for domain to come up
	I0122 21:03:37.856366  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:37.856844  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:37.856872  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:37.856820  199899 retry.go:31] will retry after 531.32828ms: waiting for domain to come up
	I0122 21:03:38.389280  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:38.389786  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:38.389809  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:38.389762  199899 retry.go:31] will retry after 816.841332ms: waiting for domain to come up
	I0122 21:03:39.208284  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:39.208799  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:39.208852  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:39.208756  199899 retry.go:31] will retry after 834.871953ms: waiting for domain to come up
	I0122 21:03:40.044851  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:40.045302  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:40.045340  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:40.045276  199899 retry.go:31] will retry after 989.523846ms: waiting for domain to come up
	I0122 21:03:41.036085  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:41.036556  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:41.036589  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:41.036512  199899 retry.go:31] will retry after 1.849192813s: waiting for domain to come up
	I0122 21:03:42.887050  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:42.887476  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:42.887501  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:42.887440  199899 retry.go:31] will retry after 1.915599649s: waiting for domain to come up
	I0122 21:03:44.804138  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:44.804673  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:44.804707  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:44.804616  199899 retry.go:31] will retry after 2.647309945s: waiting for domain to come up
	I0122 21:03:47.454983  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:47.455441  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:47.455479  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:47.455418  199899 retry.go:31] will retry after 2.211118741s: waiting for domain to come up
	I0122 21:03:49.669778  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:49.670353  199863 main.go:141] libmachine: (no-preload-086882) DBG | unable to find current IP address of domain no-preload-086882 in network mk-no-preload-086882
	I0122 21:03:49.670377  199863 main.go:141] libmachine: (no-preload-086882) DBG | I0122 21:03:49.670311  199899 retry.go:31] will retry after 3.22638341s: waiting for domain to come up
	I0122 21:03:52.899143  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:52.899739  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has current primary IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:52.899769  199863 main.go:141] libmachine: (no-preload-086882) found domain IP: 192.168.39.177
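
The "will retry after ..." ladder above (217ms growing to about 3.2s) is a jittered, roughly exponential backoff while polling for the VM's DHCP lease. A stdlib sketch of that policy (base, growth factor, and cap are illustrative, not minikube's actual constants):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		delay := 200 * time.Millisecond
		maxDelay := 4 * time.Second
		for i := 1; i <= 8; i++ {
			// jitter keeps concurrent waiters from retrying in lockstep
			jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("retry %d: will retry after %v\n", i, jittered)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}
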
	I0122 21:03:52.899782  199863 main.go:141] libmachine: (no-preload-086882) reserving static IP address...
	I0122 21:03:52.900262  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "no-preload-086882", mac: "52:54:00:ef:6b:5d", ip: "192.168.39.177"} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:52.900288  199863 main.go:141] libmachine: (no-preload-086882) DBG | skip adding static IP to network mk-no-preload-086882 - found existing host DHCP lease matching {name: "no-preload-086882", mac: "52:54:00:ef:6b:5d", ip: "192.168.39.177"}
	I0122 21:03:52.900305  199863 main.go:141] libmachine: (no-preload-086882) reserved static IP address 192.168.39.177 for domain no-preload-086882
	I0122 21:03:52.900318  199863 main.go:141] libmachine: (no-preload-086882) waiting for SSH...
	I0122 21:03:52.900330  199863 main.go:141] libmachine: (no-preload-086882) DBG | Getting to WaitForSSH function...
	I0122 21:03:52.902406  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:52.902785  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:52.902820  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:52.902893  199863 main.go:141] libmachine: (no-preload-086882) DBG | Using SSH client type: external
	I0122 21:03:52.902932  199863 main.go:141] libmachine: (no-preload-086882) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa (-rw-------)
	I0122 21:03:52.902974  199863 main.go:141] libmachine: (no-preload-086882) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:03:52.902992  199863 main.go:141] libmachine: (no-preload-086882) DBG | About to run SSH command:
	I0122 21:03:52.903001  199863 main.go:141] libmachine: (no-preload-086882) DBG | exit 0
	I0122 21:03:53.025788  199863 main.go:141] libmachine: (no-preload-086882) DBG | SSH cmd err, output: <nil>: 
	I0122 21:03:53.026248  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetConfigRaw
	I0122 21:03:53.026894  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetIP
	I0122 21:03:53.029082  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.029425  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.029454  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.029658  199863 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/config.json ...
	I0122 21:03:53.029871  199863 machine.go:93] provisionDockerMachine start ...
	I0122 21:03:53.029891  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:53.030104  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.032030  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.032331  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.032364  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.032553  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.032741  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.032919  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.033050  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.033230  199863 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:53.033415  199863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0122 21:03:53.033432  199863 main.go:141] libmachine: About to run SSH command:
	hostname
	I0122 21:03:53.134117  199863 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0122 21:03:53.134153  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetMachineName
	I0122 21:03:53.134414  199863 buildroot.go:166] provisioning hostname "no-preload-086882"
	I0122 21:03:53.134449  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetMachineName
	I0122 21:03:53.134637  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.137497  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.137845  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.137872  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.138113  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.138341  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.138514  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.138658  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.138810  199863 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:53.138983  199863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0122 21:03:53.138996  199863 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-086882 && echo "no-preload-086882" | sudo tee /etc/hostname
	I0122 21:03:53.251267  199863 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-086882
	
	I0122 21:03:53.251300  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.253945  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.254328  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.254360  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.254520  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.254699  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.254828  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.254921  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.255071  199863 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:53.255282  199863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0122 21:03:53.255301  199863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-086882' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-086882/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-086882' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:03:53.366501  199863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
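The two SSH exchanges above set the transient hostname and pin it in /etc/hosts so the node can resolve its own name without DNS. A quick way to confirm the result from inside the guest (a sketch; hostname and hosts entry taken from the log above):

    hostname                       # expect: no-preload-086882
    grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 no-preload-086882
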
	I0122 21:03:53.366533  199863 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-150966/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-150966/.minikube}
	I0122 21:03:53.366555  199863 buildroot.go:174] setting up certificates
	I0122 21:03:53.366566  199863 provision.go:84] configureAuth start
	I0122 21:03:53.366577  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetMachineName
	I0122 21:03:53.366836  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetIP
	I0122 21:03:53.369366  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.369742  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.369772  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.369884  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.372218  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.372577  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.372625  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.372757  199863 provision.go:143] copyHostCerts
	I0122 21:03:53.372809  199863 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem, removing ...
	I0122 21:03:53.372819  199863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem
	I0122 21:03:53.372880  199863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem (1078 bytes)
	I0122 21:03:53.372968  199863 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem, removing ...
	I0122 21:03:53.372976  199863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem
	I0122 21:03:53.373000  199863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem (1123 bytes)
	I0122 21:03:53.373052  199863 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem, removing ...
	I0122 21:03:53.373059  199863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem
	I0122 21:03:53.373078  199863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem (1675 bytes)
	I0122 21:03:53.373125  199863 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem org=jenkins.no-preload-086882 san=[127.0.0.1 192.168.39.177 localhost minikube no-preload-086882]
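The machine server cert is generated with the SAN list shown in the log (loopback, the VM IP, and the machine names), which is what lets later TLS connections to 192.168.39.177 verify. A hedged openssl check against the generated file (path taken from the log line above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list 127.0.0.1, 192.168.39.177, localhost, minikube, no-preload-086882
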
	I0122 21:03:53.484292  199863 provision.go:177] copyRemoteCerts
	I0122 21:03:53.484350  199863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:03:53.484373  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.487079  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.487451  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.487486  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.487673  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.487837  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.488015  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.488159  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
	I0122 21:03:53.568421  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 21:03:53.591424  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0122 21:03:53.617038  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:03:53.641143  199863 provision.go:87] duration metric: took 274.558732ms to configureAuth
	I0122 21:03:53.641183  199863 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:03:53.641395  199863 config.go:182] Loaded profile config "no-preload-086882": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:03:53.641413  199863 machine.go:96] duration metric: took 611.527448ms to provisionDockerMachine
	I0122 21:03:53.641425  199863 start.go:293] postStartSetup for "no-preload-086882" (driver="kvm2")
	I0122 21:03:53.641444  199863 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:03:53.641478  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:53.641761  199863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:03:53.641801  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.644488  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.644879  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.644914  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.645061  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.645233  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.645398  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.645564  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
	I0122 21:03:53.723951  199863 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:03:53.727861  199863 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:03:53.727891  199863 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/addons for local assets ...
	I0122 21:03:53.727976  199863 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/files for local assets ...
	I0122 21:03:53.728084  199863 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem -> 1582712.pem in /etc/ssl/certs
	I0122 21:03:53.728203  199863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:03:53.737198  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:03:53.759710  199863 start.go:296] duration metric: took 118.265562ms for postStartSetup
	I0122 21:03:53.759749  199863 fix.go:56] duration metric: took 18.465727199s for fixHost
	I0122 21:03:53.759769  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.762613  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.762953  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.762984  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.763137  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.763320  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.763465  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.763610  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.763739  199863 main.go:141] libmachine: Using SSH client type: native
	I0122 21:03:53.763899  199863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0122 21:03:53.763908  199863 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:03:53.866695  199863 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737579833.840320800
	
	I0122 21:03:53.866723  199863 fix.go:216] guest clock: 1737579833.840320800
	I0122 21:03:53.866732  199863 fix.go:229] Guest: 2025-01-22 21:03:53.8403208 +0000 UTC Remote: 2025-01-22 21:03:53.759752495 +0000 UTC m=+18.614069464 (delta=80.568305ms)
	I0122 21:03:53.866754  199863 fix.go:200] guest clock delta is within tolerance: 80.568305ms
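The fix step compares the guest's "date +%s.%N" output against the host clock and only intervenes when the skew exceeds its tolerance; here the ~80ms delta passes. An equivalent manual comparison (a sketch; SSH key path and user taken from the sshutil lines above):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa \
      docker@192.168.39.177 'date +%s.%N')
    echo "skew: $(echo "$host_ts - $guest_ts" | bc)s"
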
	I0122 21:03:53.866759  199863 start.go:83] releasing machines lock for "no-preload-086882", held for 18.572757382s
	I0122 21:03:53.866794  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:53.867098  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetIP
	I0122 21:03:53.869842  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.870331  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.870364  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.870542  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:53.871024  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:53.871240  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:03:53.871332  199863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:03:53.871386  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.871497  199863 ssh_runner.go:195] Run: cat /version.json
	I0122 21:03:53.871547  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:03:53.874452  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.874620  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.874864  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.874893  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.875082  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.875149  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:53.875178  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:53.875231  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.875326  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:03:53.875400  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.875462  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:03:53.875588  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:03:53.875585  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
	I0122 21:03:53.875697  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
	I0122 21:03:53.950772  199863 ssh_runner.go:195] Run: systemctl --version
	I0122 21:03:53.976196  199863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:03:53.981484  199863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:03:53.981547  199863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:03:53.996747  199863 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0122 21:03:53.996765  199863 start.go:495] detecting cgroup driver to use...
	I0122 21:03:53.996822  199863 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 21:03:54.027814  199863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 21:03:54.041019  199863 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:03:54.041086  199863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:03:54.054928  199863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:03:54.068097  199863 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:03:54.194332  199863 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:03:54.367560  199863 docker.go:233] disabling docker service ...
	I0122 21:03:54.367619  199863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:03:54.381512  199863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:03:54.395663  199863 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:03:54.504829  199863 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:03:54.617189  199863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:03:54.632677  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:03:54.650841  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0122 21:03:54.662627  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 21:03:54.673297  199863 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 21:03:54.673370  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 21:03:54.687529  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:03:54.698447  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 21:03:54.709225  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:03:54.720727  199863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:03:54.731500  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 21:03:54.742547  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0122 21:03:54.753461  199863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
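The sed edits above rewrite /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups, the pause:3.10 sandbox image, the runc v2 runtime, and /etc/cni/net.d as the CNI conf dir. A spot-check of the resulting file (sketch):

    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
    # expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10",
    #         restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d"
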
	I0122 21:03:54.764015  199863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:03:54.773858  199863 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:03:54.773936  199863 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:03:54.789830  199863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
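The sysctl probe fails with status 255 only because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; the subsequent modprobe and the ip_forward write are the remedy. Re-checking after the module loads (sketch):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward           # expect: 1
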
	I0122 21:03:54.801182  199863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:54.948142  199863 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 21:03:54.981787  199863 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0122 21:03:54.981865  199863 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:03:54.988348  199863 retry.go:31] will retry after 1.171639002s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
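The stat fails because the restarted containerd has not recreated its socket yet; minikube simply retries until it appears. An equivalent manual wait (sketch):

    until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done
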
	I0122 21:03:56.161141  199863 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:03:56.166319  199863 start.go:563] Will wait 60s for crictl version
	I0122 21:03:56.166392  199863 ssh_runner.go:195] Run: which crictl
	I0122 21:03:56.170473  199863 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:03:56.207527  199863 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
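crictl picks up its endpoint from the /etc/crictl.yaml written earlier; the same query can be issued with the endpoint made explicit (sketch):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
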
	I0122 21:03:56.207608  199863 ssh_runner.go:195] Run: containerd --version
	I0122 21:03:56.234966  199863 ssh_runner.go:195] Run: containerd --version
	I0122 21:03:56.261168  199863 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0122 21:03:56.262556  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetIP
	I0122 21:03:56.265469  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:56.265839  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:03:56.265871  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:03:56.266093  199863 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0122 21:03:56.270442  199863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:03:56.283981  199863 kubeadm.go:883] updating cluster {Name:no-preload-086882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-086882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:03:56.284097  199863 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:03:56.284152  199863 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:03:56.317974  199863 containerd.go:627] all images are preloaded for containerd runtime.
	I0122 21:03:56.318002  199863 cache_images.go:84] Images are preloaded, skipping loading
	I0122 21:03:56.318012  199863 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.32.1 containerd true true} ...
	I0122 21:03:56.318112  199863 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-086882 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-086882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0122 21:03:56.318171  199863 ssh_runner.go:195] Run: sudo crictl info
	I0122 21:03:56.354753  199863 cni.go:84] Creating CNI manager for ""
	I0122 21:03:56.354775  199863 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 21:03:56.354790  199863 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:03:56.354810  199863 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-086882 NodeName:no-preload-086882 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:03:56.354941  199863 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-086882"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.177"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
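Before the init phases below consume /var/tmp/minikube/kubeadm.yaml, the generated config can be sanity-checked with kubeadm itself (a sketch; "kubeadm config validate" is available in recent kubeadm releases, binary path taken from the log):

    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
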
	I0122 21:03:56.355004  199863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:03:56.365483  199863 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:03:56.365555  199863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:03:56.375302  199863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0122 21:03:56.393855  199863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:03:56.410301  199863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
	I0122 21:03:56.428436  199863 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0122 21:03:56.432238  199863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:03:56.445236  199863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:03:56.553149  199863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:03:56.571634  199863 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882 for IP: 192.168.39.177
	I0122 21:03:56.571662  199863 certs.go:194] generating shared ca certs ...
	I0122 21:03:56.571705  199863 certs.go:226] acquiring lock for ca certs: {Name:mk53e9e3df6ffb3fa8285a86887df441ff5826d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:56.571929  199863 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key
	I0122 21:03:56.571995  199863 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key
	I0122 21:03:56.572021  199863 certs.go:256] generating profile certs ...
	I0122 21:03:56.572161  199863 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/client.key
	I0122 21:03:56.572259  199863 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/apiserver.key.d060b153
	I0122 21:03:56.572328  199863 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/proxy-client.key
	I0122 21:03:56.572484  199863 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem (1338 bytes)
	W0122 21:03:56.572533  199863 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271_empty.pem, impossibly tiny 0 bytes
	I0122 21:03:56.572547  199863 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:03:56.572582  199863 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem (1078 bytes)
	I0122 21:03:56.572618  199863 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:03:56.572659  199863 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem (1675 bytes)
	I0122 21:03:56.572718  199863 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:03:56.573586  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:03:56.614565  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0122 21:03:56.645009  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:03:56.675043  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0122 21:03:56.706300  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0122 21:03:56.736310  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:03:56.766022  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:03:56.794939  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/no-preload-086882/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0122 21:03:56.823028  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem --> /usr/share/ca-certificates/158271.pem (1338 bytes)
	I0122 21:03:56.846781  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /usr/share/ca-certificates/1582712.pem (1708 bytes)
	I0122 21:03:56.870199  199863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:03:56.894167  199863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:03:56.911928  199863 ssh_runner.go:195] Run: openssl version
	I0122 21:03:56.918241  199863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/158271.pem && ln -fs /usr/share/ca-certificates/158271.pem /etc/ssl/certs/158271.pem"
	I0122 21:03:56.928740  199863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/158271.pem
	I0122 21:03:56.933477  199863 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:06 /usr/share/ca-certificates/158271.pem
	I0122 21:03:56.933534  199863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/158271.pem
	I0122 21:03:56.941025  199863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/158271.pem /etc/ssl/certs/51391683.0"
	I0122 21:03:56.951636  199863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582712.pem && ln -fs /usr/share/ca-certificates/1582712.pem /etc/ssl/certs/1582712.pem"
	I0122 21:03:56.966308  199863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582712.pem
	I0122 21:03:56.971241  199863 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:06 /usr/share/ca-certificates/1582712.pem
	I0122 21:03:56.971314  199863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582712.pem
	I0122 21:03:56.977266  199863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1582712.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:03:56.988636  199863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:03:56.999529  199863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:57.004324  199863 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:57.004387  199863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:03:57.010182  199863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
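The <hash>.0 symlinks created above follow OpenSSL's subject-hash lookup convention, which is why each cert is first hashed with "openssl x509 -hash". The same wiring done by hand, plus a trust check (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem
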
	I0122 21:03:57.021267  199863 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:03:57.025721  199863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0122 21:03:57.031432  199863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0122 21:03:57.037304  199863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0122 21:03:57.043496  199863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0122 21:03:57.049165  199863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0122 21:03:57.054813  199863 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
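Each of the openssl runs above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours, so a silent pass means every control-plane cert is good for at least a day. Used as a gate (sketch):

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
      echo "apiserver cert valid for at least 24h"
    else
      echo "apiserver cert expires within 24h"
    fi
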
	I0122 21:03:57.060373  199863 kubeadm.go:392] StartCluster: {Name:no-preload-086882 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-086882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:03:57.060468  199863 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0122 21:03:57.060533  199863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:03:57.100049  199863 cri.go:89] found id: "6dcf9dda02ab8c8101bb04d8d31dba7d5fd350184590a71c402b4a6086a85508"
	I0122 21:03:57.100078  199863 cri.go:89] found id: "25d64c7b0d2a777ad2f14267f604b0a0206b5e31538044488f937932dca0d43b"
	I0122 21:03:57.100083  199863 cri.go:89] found id: "3a498e73e33ec782f87c1d1de97c5fe73453ad1a083eb034edaff492fc5a2d2a"
	I0122 21:03:57.100086  199863 cri.go:89] found id: "70d1ab48954023a014de4b1c6949e5d8cd5e963dc5ac5f4f0f7867a68736324b"
	I0122 21:03:57.100089  199863 cri.go:89] found id: "b0dac36af8f9f96245aa960379daeabfbd08a883e97367b154d33473ba88ae3c"
	I0122 21:03:57.100091  199863 cri.go:89] found id: "625ed53d6e309660f674e4d31a36a02e7058cd85a99a50b70876c472985b1464"
	I0122 21:03:57.100094  199863 cri.go:89] found id: "bdc15113f4228b5e36a96366994267fb18556ec35405efcef939225ae9782fa6"
	I0122 21:03:57.100096  199863 cri.go:89] found id: ""
	I0122 21:03:57.100148  199863 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0122 21:03:57.117983  199863 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-22T21:03:57Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0122 21:03:57.118078  199863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:03:57.131214  199863 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0122 21:03:57.131231  199863 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0122 21:03:57.131279  199863 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0122 21:03:57.142710  199863 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0122 21:03:57.143604  199863 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-086882" does not appear in /home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:03:57.144081  199863 kubeconfig.go:62] /home/jenkins/minikube-integration/20288-150966/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-086882" cluster setting kubeconfig missing "no-preload-086882" context setting]
	I0122 21:03:57.144643  199863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/kubeconfig: {Name:mk70478f45a79a3b41e7b46029f97939b1511ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:03:57.146225  199863 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0122 21:03:57.159718  199863 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.177
	I0122 21:03:57.159757  199863 kubeadm.go:1160] stopping kube-system containers ...
	I0122 21:03:57.159771  199863 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0122 21:03:57.159836  199863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:03:57.194964  199863 cri.go:89] found id: "6dcf9dda02ab8c8101bb04d8d31dba7d5fd350184590a71c402b4a6086a85508"
	I0122 21:03:57.194995  199863 cri.go:89] found id: "25d64c7b0d2a777ad2f14267f604b0a0206b5e31538044488f937932dca0d43b"
	I0122 21:03:57.195002  199863 cri.go:89] found id: "3a498e73e33ec782f87c1d1de97c5fe73453ad1a083eb034edaff492fc5a2d2a"
	I0122 21:03:57.195008  199863 cri.go:89] found id: "70d1ab48954023a014de4b1c6949e5d8cd5e963dc5ac5f4f0f7867a68736324b"
	I0122 21:03:57.195014  199863 cri.go:89] found id: "b0dac36af8f9f96245aa960379daeabfbd08a883e97367b154d33473ba88ae3c"
	I0122 21:03:57.195019  199863 cri.go:89] found id: "625ed53d6e309660f674e4d31a36a02e7058cd85a99a50b70876c472985b1464"
	I0122 21:03:57.195025  199863 cri.go:89] found id: "bdc15113f4228b5e36a96366994267fb18556ec35405efcef939225ae9782fa6"
	I0122 21:03:57.195030  199863 cri.go:89] found id: ""
	I0122 21:03:57.195038  199863 cri.go:252] Stopping containers: [6dcf9dda02ab8c8101bb04d8d31dba7d5fd350184590a71c402b4a6086a85508 25d64c7b0d2a777ad2f14267f604b0a0206b5e31538044488f937932dca0d43b 3a498e73e33ec782f87c1d1de97c5fe73453ad1a083eb034edaff492fc5a2d2a 70d1ab48954023a014de4b1c6949e5d8cd5e963dc5ac5f4f0f7867a68736324b b0dac36af8f9f96245aa960379daeabfbd08a883e97367b154d33473ba88ae3c 625ed53d6e309660f674e4d31a36a02e7058cd85a99a50b70876c472985b1464 bdc15113f4228b5e36a96366994267fb18556ec35405efcef939225ae9782fa6]
	I0122 21:03:57.195114  199863 ssh_runner.go:195] Run: which crictl
	I0122 21:03:57.199153  199863 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 6dcf9dda02ab8c8101bb04d8d31dba7d5fd350184590a71c402b4a6086a85508 25d64c7b0d2a777ad2f14267f604b0a0206b5e31538044488f937932dca0d43b 3a498e73e33ec782f87c1d1de97c5fe73453ad1a083eb034edaff492fc5a2d2a 70d1ab48954023a014de4b1c6949e5d8cd5e963dc5ac5f4f0f7867a68736324b b0dac36af8f9f96245aa960379daeabfbd08a883e97367b154d33473ba88ae3c 625ed53d6e309660f674e4d31a36a02e7058cd85a99a50b70876c472985b1464 bdc15113f4228b5e36a96366994267fb18556ec35405efcef939225ae9782fa6
	I0122 21:03:57.239246  199863 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0122 21:03:57.255299  199863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:03:57.264486  199863 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:03:57.264511  199863 kubeadm.go:157] found existing configuration files:
	
	I0122 21:03:57.264566  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:03:57.274156  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:03:57.274210  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:03:57.283843  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:03:57.292763  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:03:57.292820  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:03:57.305382  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:03:57.316464  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:03:57.316515  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:03:57.325544  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:03:57.334189  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:03:57.334249  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:03:57.343349  199863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:03:57.352864  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:57.475081  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:58.603422  199863 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.12829516s)
	I0122 21:03:58.603463  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:58.805066  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:58.883153  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:03:58.972204  199863 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:03:58.972282  199863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:03:59.472681  199863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:03:59.972657  199863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:03:59.995220  199863 api_server.go:72] duration metric: took 1.023013985s to wait for apiserver process to appear ...
	I0122 21:03:59.995255  199863 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:03:59.995288  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:03:59.995877  199863 api_server.go:269] stopped: https://192.168.39.177:8443/healthz: Get "https://192.168.39.177:8443/healthz": dial tcp 192.168.39.177:8443: connect: connection refused
	I0122 21:04:00.496156  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:02.920731  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0122 21:04:02.920768  199863 api_server.go:103] status: https://192.168.39.177:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:04:02.920786  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:02.958703  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0122 21:04:02.996031  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:03.024954  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:03.495592  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:03.500756  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:03.995409  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:04.000849  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:04.495411  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:04.501160  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:04.995843  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:05.003949  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:05.496068  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:05.500878  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:05.995584  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:06.000371  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0122 21:04:06.496064  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:04:06.500606  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I0122 21:04:06.507133  199863 api_server.go:141] control plane version: v1.32.1
	I0122 21:04:06.507164  199863 api_server.go:131] duration metric: took 6.511901103s to wait for apiserver health ...
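The six-second wait just logged followed the typical cold-start progression: connection refused while the apiserver static pod comes up, then 403 for the anonymous probe until access to /healthz is bootstrapped, then 500 while the post-start hooks drain, and finally 200. The same wait can be scripted with curl (-k because, like the probe above, we skip verifying the apiserver certificate):

# Poll /healthz until the API server reports healthy, as minikube does above.
URL=https://192.168.39.177:8443/healthz
until [ "$(curl -ks -o /dev/null -w '%{http_code}' "$URL")" = "200" ]; do
  sleep 0.5
done
curl -ks "$URL"; echo   # prints "ok" once all post-start hooks have completed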
	I0122 21:04:06.507177  199863 cni.go:84] Creating CNI manager for ""
	I0122 21:04:06.507185  199863 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 21:04:06.509050  199863 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:04:06.510529  199863 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:04:06.520754  199863 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
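The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A bridge conflist of the kind being configured here generally looks like the sketch below; the network name, subnet, and plugin list are illustrative, not the exact file minikube wrote:

# Illustrative bridge CNI config (values are placeholders, not minikube's actual file).
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF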
	I0122 21:04:06.536838  199863 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:04:06.547104  199863 system_pods.go:59] 8 kube-system pods found
	I0122 21:04:06.547156  199863 system_pods.go:61] "coredns-668d6bf9bc-z6n66" [2796e907-7512-4b25-9d14-bfca0e688a8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0122 21:04:06.547169  199863 system_pods.go:61] "etcd-no-preload-086882" [035f4679-1f1d-4686-8b03-e7f34eccd0d7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0122 21:04:06.547180  199863 system_pods.go:61] "kube-apiserver-no-preload-086882" [b8321628-310c-45be-8472-273259121b8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0122 21:04:06.547194  199863 system_pods.go:61] "kube-controller-manager-no-preload-086882" [ff925911-da27-4f36-980b-d8b27fd368fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0122 21:04:06.547202  199863 system_pods.go:61] "kube-proxy-7jqj5" [6d11d71a-6c4f-4c19-89e3-cf8197010894] Running
	I0122 21:04:06.547214  199863 system_pods.go:61] "kube-scheduler-no-preload-086882" [d81ba6a6-fae8-4e42-b478-05ff3a936f78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0122 21:04:06.547225  199863 system_pods.go:61] "metrics-server-f79f97bbb-vjgq4" [e8132e4f-cd9c-4320-acb5-6b815bc01da5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:04:06.547233  199863 system_pods.go:61] "storage-provisioner" [2f609b9b-bd1a-4e69-b70e-4aaa9ecf39c7] Running
	I0122 21:04:06.547243  199863 system_pods.go:74] duration metric: took 10.388034ms to wait for pod list to return data ...
	I0122 21:04:06.547256  199863 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:04:06.550134  199863 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:04:06.550159  199863 node_conditions.go:123] node cpu capacity is 2
	I0122 21:04:06.550171  199863 node_conditions.go:105] duration metric: took 2.906065ms to run NodePressure ...
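The pod list and node capacity minikube just gathered through the API can be checked by hand as well (assuming a kubeconfig pointed at this cluster; the node name appears in the static-pod names above):

# Inspect the same state directly.
kubectl -n kube-system get pods -o wide
kubectl describe node no-preload-086882 | grep -A5 -i capacity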
	I0122 21:04:06.550192  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0122 21:04:06.808712  199863 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0122 21:04:06.812538  199863 kubeadm.go:739] kubelet initialised
	I0122 21:04:06.812560  199863 kubeadm.go:740] duration metric: took 3.819418ms waiting for restarted kubelet to initialise ...
	I0122 21:04:06.812571  199863 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0122 21:04:06.817371  199863 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-z6n66" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:08.822741  199863 pod_ready.go:103] pod "coredns-668d6bf9bc-z6n66" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:10.823471  199863 pod_ready.go:93] pod "coredns-668d6bf9bc-z6n66" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:10.823495  199863 pod_ready.go:82] duration metric: took 4.006081845s for pod "coredns-668d6bf9bc-z6n66" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:10.823507  199863 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:10.833222  199863 pod_ready.go:93] pod "etcd-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:10.833248  199863 pod_ready.go:82] duration metric: took 9.735188ms for pod "etcd-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:10.833258  199863 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:12.839075  199863 pod_ready.go:103] pod "kube-apiserver-no-preload-086882" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:14.840343  199863 pod_ready.go:103] pod "kube-apiserver-no-preload-086882" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:17.338706  199863 pod_ready.go:103] pod "kube-apiserver-no-preload-086882" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:17.838928  199863 pod_ready.go:93] pod "kube-apiserver-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:17.838956  199863 pod_ready.go:82] duration metric: took 7.005684466s for pod "kube-apiserver-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:17.838966  199863 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:17.844848  199863 pod_ready.go:93] pod "kube-controller-manager-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:17.844876  199863 pod_ready.go:82] duration metric: took 5.902931ms for pod "kube-controller-manager-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:17.844889  199863 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7jqj5" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:17.851190  199863 pod_ready.go:93] pod "kube-proxy-7jqj5" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:17.851215  199863 pod_ready.go:82] duration metric: took 6.317978ms for pod "kube-proxy-7jqj5" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:17.851226  199863 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:17.855905  199863 pod_ready.go:93] pod "kube-scheduler-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:04:17.855928  199863 pod_ready.go:82] duration metric: took 4.695314ms for pod "kube-scheduler-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:17.855936  199863 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace to be "Ready" ...
	I0122 21:04:19.861920  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:21.862152  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:24.362138  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:26.362807  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:28.363177  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:30.862958  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:33.361934  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:35.362796  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:37.362950  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:39.864635  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:42.362455  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:44.363894  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:46.862464  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:49.362857  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:51.861506  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:53.862298  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:55.862990  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:04:58.362573  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:00.863464  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:03.363372  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:05.364376  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:07.863035  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:10.362190  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:12.363469  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:14.863299  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:16.863718  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:19.363546  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:21.863274  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:24.363017  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:26.861661  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:28.862769  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:31.361662  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:33.362336  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:35.363915  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:37.862451  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:40.362830  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:42.863438  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:44.867407  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:47.363385  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:49.862230  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:51.863112  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:54.361873  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:56.363625  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:05:58.861887  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:01.365000  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:03.861912  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:05.863215  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:08.362490  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:10.362755  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:12.362925  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:14.862405  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:16.863396  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:19.362483  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:21.362730  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:23.362845  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:25.863379  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:28.361883  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:30.362613  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:32.861651  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:34.863105  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:37.362116  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:39.363217  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:41.863453  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:44.362399  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:46.362720  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:48.861792  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:50.862945  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:53.361560  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:55.362238  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:57.362725  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:06:59.861405  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:01.861909  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:03.862701  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:06.362177  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:08.862947  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:11.363033  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:13.862328  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:16.362881  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:18.863297  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:21.365737  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:23.863617  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:26.362887  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:28.862430  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:30.863285  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:32.863394  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:35.362726  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:37.862298  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:39.864052  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:42.361384  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:44.362733  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:46.861919  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:48.862360  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:50.862567  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:53.362180  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:55.362695  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:07:57.864978  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:00.362686  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:02.863699  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:05.363214  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:07.862468  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:10.363351  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:12.862329  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:14.862542  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:16.862831  199863 pod_ready.go:103] pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:17.856414  199863 pod_ready.go:82] duration metric: took 4m0.000456862s for pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace to be "Ready" ...
	E0122 21:08:17.856443  199863 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-vjgq4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0122 21:08:17.856461  199863 pod_ready.go:39] duration metric: took 4m11.043877916s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:08:17.856489  199863 kubeadm.go:597] duration metric: took 4m20.725252597s to restartPrimaryControlPlane
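The four-minute wait that just timed out is the equivalent of a kubectl wait on the Ready condition; metrics-server never became Ready, and that single failure is what forces the cluster reset below. A sketch, assuming the conventional k8s-app=metrics-server label (the pod's labels are not shown in the log):

# Hand-run equivalent of the wait that timed out above.
kubectl -n kube-system wait pod -l k8s-app=metrics-server \
  --for=condition=Ready --timeout=4m
# Exits non-zero on timeout, mirroring the WaitExtra failure.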
	W0122 21:08:17.856549  199863 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0122 21:08:17.856574  199863 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0122 21:08:19.436182  199863 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.579575295s)
	I0122 21:08:19.436250  199863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
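After the reset, minikube only confirms that the kubelet unit is still active before re-initialising. The same check by hand:

# Verify the kubelet service survived the reset.
sudo systemctl is-active --quiet kubelet && echo "kubelet active"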
	I0122 21:08:19.452076  199863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:08:19.467480  199863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:08:19.480984  199863 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:08:19.481007  199863 kubeadm.go:157] found existing configuration files:
	
	I0122 21:08:19.481058  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:08:19.489932  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:08:19.490007  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:08:19.499086  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:08:19.507803  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:08:19.507865  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:08:19.517066  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:08:19.526378  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:08:19.526439  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:08:19.535604  199863 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:08:19.544387  199863 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:08:19.544453  199863 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
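The four grep-then-rm exchanges above amount to a single sweep: delete any leftover kubeconfig that does not reference the expected control-plane endpoint (here every file is already gone, so each grep exits 2 and the rm is a no-op). As one loop:

# Remove stale kubeconfigs that don't reference the expected endpoint.
# grep -q exits non-zero on both "no match" and "file missing", triggering the rm.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done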
	I0122 21:08:19.553543  199863 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:08:19.603826  199863 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 21:08:19.603898  199863 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:08:19.703160  199863 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:08:19.703323  199863 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:08:19.703493  199863 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 21:08:19.709097  199863 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:08:19.712322  199863 out.go:235]   - Generating certificates and keys ...
	I0122 21:08:19.712419  199863 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:08:19.712476  199863 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:08:19.712543  199863 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0122 21:08:19.712651  199863 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0122 21:08:19.712756  199863 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0122 21:08:19.712807  199863 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0122 21:08:19.712866  199863 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0122 21:08:19.712919  199863 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0122 21:08:19.712996  199863 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0122 21:08:19.713078  199863 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0122 21:08:19.713114  199863 kubeadm.go:310] [certs] Using the existing "sa" key
	I0122 21:08:19.713180  199863 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:08:19.807908  199863 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:08:19.881901  199863 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 21:08:19.980622  199863 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:08:20.395401  199863 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:08:20.720273  199863 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:08:20.720765  199863 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:08:20.725251  199863 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:08:20.727167  199863 out.go:235]   - Booting up control plane ...
	I0122 21:08:20.727280  199863 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:08:20.727386  199863 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:08:20.727755  199863 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:08:20.748224  199863 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:08:20.755044  199863 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:08:20.755133  199863 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:08:20.895007  199863 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 21:08:20.895175  199863 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 21:08:21.396252  199863 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.658629ms
	I0122 21:08:21.396374  199863 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 21:08:26.398071  199863 kubeadm.go:310] [api-check] The API server is healthy after 5.00164466s
	I0122 21:08:26.419052  199863 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 21:08:26.447145  199863 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 21:08:26.472575  199863 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 21:08:26.472843  199863 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-086882 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 21:08:26.485360  199863 kubeadm.go:310] [bootstrap-token] Using token: 8jfaa1.1dzi7h4eaphb0si2
	I0122 21:08:26.487224  199863 out.go:235]   - Configuring RBAC rules ...
	I0122 21:08:26.487408  199863 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 21:08:26.494945  199863 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 21:08:26.515036  199863 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 21:08:26.519821  199863 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0122 21:08:26.523730  199863 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 21:08:26.529167  199863 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 21:08:26.805607  199863 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 21:08:27.234895  199863 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0122 21:08:27.806322  199863 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0122 21:08:27.807772  199863 kubeadm.go:310] 
	I0122 21:08:27.807852  199863 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0122 21:08:27.807863  199863 kubeadm.go:310] 
	I0122 21:08:27.808047  199863 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0122 21:08:27.808072  199863 kubeadm.go:310] 
	I0122 21:08:27.808113  199863 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0122 21:08:27.808189  199863 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 21:08:27.808284  199863 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 21:08:27.808312  199863 kubeadm.go:310] 
	I0122 21:08:27.808409  199863 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0122 21:08:27.808418  199863 kubeadm.go:310] 
	I0122 21:08:27.808477  199863 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 21:08:27.808487  199863 kubeadm.go:310] 
	I0122 21:08:27.808557  199863 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0122 21:08:27.808658  199863 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 21:08:27.808748  199863 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 21:08:27.808756  199863 kubeadm.go:310] 
	I0122 21:08:27.808902  199863 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 21:08:27.809029  199863 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0122 21:08:27.809056  199863 kubeadm.go:310] 
	I0122 21:08:27.809161  199863 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8jfaa1.1dzi7h4eaphb0si2 \
	I0122 21:08:27.809313  199863 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a \
	I0122 21:08:27.809352  199863 kubeadm.go:310] 	--control-plane 
	I0122 21:08:27.809361  199863 kubeadm.go:310] 
	I0122 21:08:27.809481  199863 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0122 21:08:27.809497  199863 kubeadm.go:310] 
	I0122 21:08:27.809598  199863 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8jfaa1.1dzi7h4eaphb0si2 \
	I0122 21:08:27.809725  199863 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a 
	I0122 21:08:27.810425  199863 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:08:27.810463  199863 cni.go:84] Creating CNI manager for ""
	I0122 21:08:27.810478  199863 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 21:08:27.812302  199863 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:08:27.813595  199863 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:08:27.825267  199863 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
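
The two lines above show minikube writing a small bridge CNI conflist into /etc/cni/net.d. A minimal sketch of that step in Go, assuming an illustrative plugin chain and subnet rather than the exact 496-byte file the log copies:

	// write_conflist.go: drop a minimal bridge CNI config where containerd
	// will pick it up. The subnet and plugin list are assumptions for
	// illustration, not minikube's shipped file.
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
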
	I0122 21:08:27.843362  199863 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:08:27.843452  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:27.843456  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-086882 minikube.k8s.io/updated_at=2025_01_22T21_08_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4 minikube.k8s.io/name=no-preload-086882 minikube.k8s.io/primary=true
	I0122 21:08:27.868082  199863 ops.go:34] apiserver oom_adj: -16
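
The ops.go line above records the kube-apiserver's OOM score adjustment (-16), read straight out of /proc. A standalone sketch of the same check, assuming a single kube-apiserver process is running:

	// oom_check.go: find the apiserver PID with pgrep and read its oom_adj.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out)) // assumes exactly one match
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // e.g. -16
	}
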
	I0122 21:08:28.039208  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:28.540132  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:29.040380  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:29.540134  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:30.039393  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:30.539954  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:31.040147  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:31.539614  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:32.039853  199863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:08:32.135724  199863 kubeadm.go:1113] duration metric: took 4.292349615s to wait for elevateKubeSystemPrivileges
	I0122 21:08:32.135767  199863 kubeadm.go:394] duration metric: took 4m35.075399095s to StartCluster
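
The repeated `kubectl get sa default` runs above poll roughly every 500ms until the default service account exists, which is how the 4.29s elevateKubeSystemPrivileges metric accrues. A sketch of that retry shape, reusing the binary and kubeconfig paths from the log:

	// wait_sa.go: re-run a command every 500ms until it succeeds or the
	// context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			cmd := exec.CommandContext(ctx, kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // default service account exists
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.32.1/kubectl", "/var/lib/minikube/kubeconfig"); err != nil {
			fmt.Println(err)
		}
	}

The same run/check/sleep loop against a deadline underlies most of the `duration metric: took ...` lines in this log.
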
	I0122 21:08:32.135788  199863 settings.go:142] acquiring lock: {Name:mkfbfc304d1e9b2b80529e33af6a426e89d118a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:08:32.135878  199863 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:08:32.138080  199863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/kubeconfig: {Name:mk70478f45a79a3b41e7b46029f97939b1511ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:08:32.138327  199863 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 21:08:32.138432  199863 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:08:32.138553  199863 addons.go:69] Setting storage-provisioner=true in profile "no-preload-086882"
	I0122 21:08:32.138546  199863 config.go:182] Loaded profile config "no-preload-086882": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:08:32.138574  199863 addons.go:69] Setting dashboard=true in profile "no-preload-086882"
	I0122 21:08:32.138591  199863 addons.go:69] Setting metrics-server=true in profile "no-preload-086882"
	I0122 21:08:32.138597  199863 addons.go:69] Setting default-storageclass=true in profile "no-preload-086882"
	I0122 21:08:32.138597  199863 addons.go:238] Setting addon dashboard=true in "no-preload-086882"
	W0122 21:08:32.138621  199863 addons.go:247] addon dashboard should already be in state true
	I0122 21:08:32.138626  199863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-086882"
	I0122 21:08:32.138654  199863 host.go:66] Checking if "no-preload-086882" exists ...
	I0122 21:08:32.138610  199863 addons.go:238] Setting addon metrics-server=true in "no-preload-086882"
	W0122 21:08:32.138697  199863 addons.go:247] addon metrics-server should already be in state true
	I0122 21:08:32.138581  199863 addons.go:238] Setting addon storage-provisioner=true in "no-preload-086882"
	W0122 21:08:32.138739  199863 addons.go:247] addon storage-provisioner should already be in state true
	I0122 21:08:32.138746  199863 host.go:66] Checking if "no-preload-086882" exists ...
	I0122 21:08:32.138768  199863 host.go:66] Checking if "no-preload-086882" exists ...
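
The addons.go entries above carry a toEnable map from addon name to desired state, then flip each requested addon on for the profile. A sketch of the loop such a map implies; enableAddon is a hypothetical stand-in for minikube's per-addon logic, not its real API:

	// enable_addons.go: iterate a desired-state map and enable what is requested.
	package main

	import "fmt"

	// enableAddon is hypothetical; minikube's actual enable path differs.
	func enableAddon(profile, name string) error {
		fmt.Printf("enabling %s in profile %q\n", name, profile)
		return nil
	}

	func main() {
		toEnable := map[string]bool{
			"storage-provisioner":  true,
			"default-storageclass": true,
			"metrics-server":       true,
			"dashboard":            true,
			"ingress":              false,
		}
		for name, want := range toEnable {
			if !want {
				continue
			}
			if err := enableAddon("no-preload-086882", name); err != nil {
				fmt.Println("failed:", name, err)
			}
		}
	}
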
	I0122 21:08:32.139043  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.139073  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.139089  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.139102  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.139118  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.139148  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.139222  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.139290  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.139962  199863 out.go:177] * Verifying Kubernetes components...
	I0122 21:08:32.146807  199863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:08:32.155529  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0122 21:08:32.155724  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45353
	I0122 21:08:32.156103  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.156224  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.156671  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.156694  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.156830  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.156853  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.157121  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.157242  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.157721  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.157758  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.158503  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.158552  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.158616  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0122 21:08:32.159015  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I0122 21:08:32.159015  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.159421  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.159533  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.159550  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.159954  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.159969  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.159985  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.160143  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetState
	I0122 21:08:32.161152  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.161763  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.161797  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.163367  199863 addons.go:238] Setting addon default-storageclass=true in "no-preload-086882"
	W0122 21:08:32.163386  199863 addons.go:247] addon default-storageclass should already be in state true
	I0122 21:08:32.163416  199863 host.go:66] Checking if "no-preload-086882" exists ...
	I0122 21:08:32.163662  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.163693  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.181131  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42149
	I0122 21:08:32.181538  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.181987  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.182014  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.182435  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.182620  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetState
	I0122 21:08:32.184274  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:08:32.186332  199863 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0122 21:08:32.187451  199863 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0122 21:08:32.187465  199863 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0122 21:08:32.187481  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:08:32.190393  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I0122 21:08:32.190818  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.190998  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.191258  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:08:32.191280  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.191625  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:08:32.191800  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.191815  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.191882  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:08:32.192081  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:08:32.192259  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
	I0122 21:08:32.192586  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.192809  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetState
	I0122 21:08:32.194640  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:08:32.196252  199863 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0122 21:08:32.197558  199863 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0122 21:08:32.198916  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0122 21:08:32.198936  199863 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0122 21:08:32.198954  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:08:32.202452  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.202917  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:08:32.202939  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.203093  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:08:32.203276  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:08:32.203409  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:08:32.203541  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
	I0122 21:08:32.208376  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
	I0122 21:08:32.208842  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.209246  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.209258  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.209614  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.209778  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40135
	I0122 21:08:32.209921  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetState
	I0122 21:08:32.210140  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.210615  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.210629  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.210958  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.211532  199863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:08:32.211570  199863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:08:32.211746  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:08:32.213679  199863 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:08:32.214942  199863 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:08:32.214955  199863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:08:32.214967  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:08:32.218635  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.219023  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:08:32.219040  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.219218  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:08:32.219422  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:08:32.219600  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:08:32.219722  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
	I0122 21:08:32.228135  199863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0122 21:08:32.228507  199863 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:08:32.228875  199863 main.go:141] libmachine: Using API Version  1
	I0122 21:08:32.228886  199863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:08:32.229209  199863 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:08:32.229353  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetState
	I0122 21:08:32.231202  199863 main.go:141] libmachine: (no-preload-086882) Calling .DriverName
	I0122 21:08:32.231431  199863 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:08:32.231448  199863 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:08:32.231466  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHHostname
	I0122 21:08:32.234419  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.234948  199863 main.go:141] libmachine: (no-preload-086882) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:6b:5d", ip: ""} in network mk-no-preload-086882: {Iface:virbr3 ExpiryTime:2025-01-22 22:03:46 +0000 UTC Type:0 Mac:52:54:00:ef:6b:5d Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-086882 Clientid:01:52:54:00:ef:6b:5d}
	I0122 21:08:32.234973  199863 main.go:141] libmachine: (no-preload-086882) DBG | domain no-preload-086882 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:6b:5d in network mk-no-preload-086882
	I0122 21:08:32.235122  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHPort
	I0122 21:08:32.235311  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHKeyPath
	I0122 21:08:32.235466  199863 main.go:141] libmachine: (no-preload-086882) Calling .GetSSHUsername
	I0122 21:08:32.235605  199863 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa Username:docker}
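
Each sshutil.go line above opens a key-authenticated SSH connection to the VM at 192.168.39.177:22 so addon manifests can be copied over. A sketch using golang.org/x/crypto/ssh with the user and key path from the log; ignoring the host key is a shortcut acceptable only for a throwaway test VM:

	// ssh_client.go: dial the minikube VM with private-key auth.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; never in production
		}
		return ssh.Dial("tcp", addr, cfg)
	}

	func main() {
		c, err := newSSHClient("192.168.39.177:22", "docker",
			"/home/jenkins/minikube-integration/20288-150966/.minikube/machines/no-preload-086882/id_rsa")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer c.Close()
	}
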
	I0122 21:08:32.354262  199863 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:08:32.374842  199863 node_ready.go:35] waiting up to 6m0s for node "no-preload-086882" to be "Ready" ...
	I0122 21:08:32.393669  199863 node_ready.go:49] node "no-preload-086882" has status "Ready":"True"
	I0122 21:08:32.393703  199863 node_ready.go:38] duration metric: took 18.82866ms for node "no-preload-086882" to be "Ready" ...
	I0122 21:08:32.393716  199863 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0122 21:08:32.402126  199863 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z8m8n" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:32.442792  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0122 21:08:32.442826  199863 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0122 21:08:32.470338  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0122 21:08:32.470368  199863 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0122 21:08:32.482900  199863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:08:32.484433  199863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:08:32.555878  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0122 21:08:32.555915  199863 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0122 21:08:32.563451  199863 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0122 21:08:32.563475  199863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0122 21:08:32.633432  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0122 21:08:32.633459  199863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0122 21:08:32.642969  199863 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0122 21:08:32.642997  199863 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0122 21:08:32.787677  199863 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:08:32.787709  199863 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0122 21:08:32.793383  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0122 21:08:32.793406  199863 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0122 21:08:32.856948  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0122 21:08:32.856973  199863 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0122 21:08:32.906528  199863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0122 21:08:32.947453  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0122 21:08:32.947486  199863 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0122 21:08:33.071007  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:33.071033  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:33.071316  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:33.071335  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:33.071373  199863 main.go:141] libmachine: (no-preload-086882) DBG | Closing plugin on server side
	I0122 21:08:33.071424  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:33.071455  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:33.071704  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:33.071723  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:33.071739  199863 main.go:141] libmachine: (no-preload-086882) DBG | Closing plugin on server side
	I0122 21:08:33.078199  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:33.078224  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:33.078482  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:33.078500  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:33.105569  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0122 21:08:33.105596  199863 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0122 21:08:33.205787  199863 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:08:33.205815  199863 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0122 21:08:33.323096  199863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0122 21:08:33.776420  199863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.291948358s)
	I0122 21:08:33.776478  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:33.776493  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:33.776876  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:33.776895  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:33.776902  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:33.776910  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:33.776879  199863 main.go:141] libmachine: (no-preload-086882) DBG | Closing plugin on server side
	I0122 21:08:33.777163  199863 main.go:141] libmachine: (no-preload-086882) DBG | Closing plugin on server side
	I0122 21:08:33.777209  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:33.777222  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:34.209305  199863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302698585s)
	I0122 21:08:34.209359  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:34.209376  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:34.209838  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:34.209862  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:34.209872  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:34.209879  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:34.210187  199863 main.go:141] libmachine: (no-preload-086882) DBG | Closing plugin on server side
	I0122 21:08:34.210279  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:34.210294  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:34.210312  199863 addons.go:479] Verifying addon metrics-server=true in "no-preload-086882"
	I0122 21:08:34.410647  199863 pod_ready.go:103] pod "coredns-668d6bf9bc-z8m8n" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:34.858043  199863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.534890356s)
	I0122 21:08:34.858111  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:34.858126  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:34.858416  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:34.858457  199863 main.go:141] libmachine: (no-preload-086882) DBG | Closing plugin on server side
	I0122 21:08:34.858464  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:34.858489  199863 main.go:141] libmachine: Making call to close driver server
	I0122 21:08:34.858501  199863 main.go:141] libmachine: (no-preload-086882) Calling .Close
	I0122 21:08:34.858799  199863 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:08:34.858856  199863 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:08:34.858824  199863 main.go:141] libmachine: (no-preload-086882) DBG | Closing plugin on server side
	I0122 21:08:34.860517  199863 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-086882 addons enable metrics-server
	
	I0122 21:08:34.862142  199863 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0122 21:08:34.863626  199863 addons.go:514] duration metric: took 2.725205937s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0122 21:08:36.910570  199863 pod_ready.go:103] pod "coredns-668d6bf9bc-z8m8n" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:38.910679  199863 pod_ready.go:103] pod "coredns-668d6bf9bc-z8m8n" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:40.911049  199863 pod_ready.go:103] pod "coredns-668d6bf9bc-z8m8n" in "kube-system" namespace has status "Ready":"False"
	I0122 21:08:42.910404  199863 pod_ready.go:93] pod "coredns-668d6bf9bc-z8m8n" in "kube-system" namespace has status "Ready":"True"
	I0122 21:08:42.910433  199863 pod_ready.go:82] duration metric: took 10.508272975s for pod "coredns-668d6bf9bc-z8m8n" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.910445  199863 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zmk2l" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.916877  199863 pod_ready.go:93] pod "coredns-668d6bf9bc-zmk2l" in "kube-system" namespace has status "Ready":"True"
	I0122 21:08:42.916907  199863 pod_ready.go:82] duration metric: took 6.45336ms for pod "coredns-668d6bf9bc-zmk2l" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.916921  199863 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.923297  199863 pod_ready.go:93] pod "etcd-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:08:42.923325  199863 pod_ready.go:82] duration metric: took 6.392601ms for pod "etcd-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.923338  199863 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.927792  199863 pod_ready.go:93] pod "kube-apiserver-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:08:42.927820  199863 pod_ready.go:82] duration metric: took 4.472509ms for pod "kube-apiserver-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.927833  199863 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.932924  199863 pod_ready.go:93] pod "kube-controller-manager-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:08:42.932945  199863 pod_ready.go:82] duration metric: took 5.102515ms for pod "kube-controller-manager-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:42.932953  199863 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zrkm" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:43.307395  199863 pod_ready.go:93] pod "kube-proxy-6zrkm" in "kube-system" namespace has status "Ready":"True"
	I0122 21:08:43.307428  199863 pod_ready.go:82] duration metric: took 374.465983ms for pod "kube-proxy-6zrkm" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:43.307442  199863 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:43.711292  199863 pod_ready.go:93] pod "kube-scheduler-no-preload-086882" in "kube-system" namespace has status "Ready":"True"
	I0122 21:08:43.711315  199863 pod_ready.go:82] duration metric: took 403.864832ms for pod "kube-scheduler-no-preload-086882" in "kube-system" namespace to be "Ready" ...
	I0122 21:08:43.711324  199863 pod_ready.go:39] duration metric: took 11.317594065s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
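
The pod_ready block above polls each system-critical pod until its Ready condition reports True. A client-go sketch of one such wait; the kubeconfig path and pod name come from the log, while the 2-second interval is an assumption:

	// pod_ready.go: poll a pod until its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-668d6bf9bc-z8m8n", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
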
	I0122 21:08:43.711339  199863 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:08:43.711382  199863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:08:43.731381  199863 api_server.go:72] duration metric: took 11.593020113s to wait for apiserver process to appear ...
	I0122 21:08:43.731410  199863 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:08:43.731432  199863 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0122 21:08:43.736795  199863 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I0122 21:08:43.738193  199863 api_server.go:141] control plane version: v1.32.1
	I0122 21:08:43.738223  199863 api_server.go:131] duration metric: took 6.804396ms to wait for apiserver health ...
	I0122 21:08:43.738240  199863 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:08:43.914297  199863 system_pods.go:59] 9 kube-system pods found
	I0122 21:08:43.914325  199863 system_pods.go:61] "coredns-668d6bf9bc-z8m8n" [f239526a-64a2-4d43-8442-88b34b2f90dd] Running
	I0122 21:08:43.914330  199863 system_pods.go:61] "coredns-668d6bf9bc-zmk2l" [07cc6931-80c1-4e26-859f-28d85e5d7433] Running
	I0122 21:08:43.914334  199863 system_pods.go:61] "etcd-no-preload-086882" [e99ff48b-613a-4ff9-8131-41e640d9a09b] Running
	I0122 21:08:43.914337  199863 system_pods.go:61] "kube-apiserver-no-preload-086882" [c42ead4e-1acd-4299-aebf-1ced3be24d4f] Running
	I0122 21:08:43.914341  199863 system_pods.go:61] "kube-controller-manager-no-preload-086882" [29af664f-a33e-4434-898b-0461db855fec] Running
	I0122 21:08:43.914344  199863 system_pods.go:61] "kube-proxy-6zrkm" [30a55902-e674-4923-b718-4c6b89189a96] Running
	I0122 21:08:43.914347  199863 system_pods.go:61] "kube-scheduler-no-preload-086882" [b43aece7-b484-44a7-8961-427d0331663d] Running
	I0122 21:08:43.914352  199863 system_pods.go:61] "metrics-server-f79f97bbb-vrrbf" [ad6b1544-2e14-4185-99b2-06a343ca594f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0122 21:08:43.914356  199863 system_pods.go:61] "storage-provisioner" [32a0517c-e70f-4997-bee7-95d078b3bfc8] Running
	I0122 21:08:43.914364  199863 system_pods.go:74] duration metric: took 176.117591ms to wait for pod list to return data ...
	I0122 21:08:43.914373  199863 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:08:44.110386  199863 default_sa.go:45] found service account: "default"
	I0122 21:08:44.110422  199863 default_sa.go:55] duration metric: took 196.041206ms for default service account to be created ...
	I0122 21:08:44.110433  199863 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 21:08:44.309998  199863 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-086882 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-086882 -n no-preload-086882
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-086882 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-086882 logs -n 25: (1.199129361s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575 sudo cat                | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575 sudo cat                | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575 sudo cat                | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC |                     |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-988575                         | enable-default-cni-988575 | jenkins | v1.35.0 | 22 Jan 25 21:15 UTC | 22 Jan 25 21:15 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
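
The audit table above records the sequence of `minikube ssh` diagnostics the runner issues against the profile before deleting it: the containerd unit, its config, a config dump, then the crio equivalents. A minimal Go sketch of scripting that same collection pass is below; the binary path and profile name are taken from this report, while the function itself is an illustrative assumption, not the harness's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// collectRuntimeDiagnostics mirrors the ssh rows in the audit table by
// shelling out to the minikube binary and running each diagnostic in the
// guest. Sketch only; the real harness drives these as separate test steps.
func collectRuntimeDiagnostics(profile string) {
	cmds := []string{
		"sudo systemctl status containerd --all --full --no-pager",
		"sudo systemctl cat containerd --no-pager",
		"sudo cat /lib/systemd/system/containerd.service",
		"sudo cat /etc/containerd/config.toml",
		"sudo containerd config dump",
		"sudo systemctl status crio --all --full --no-pager",
		"sudo systemctl cat crio --no-pager",
		"sudo crio config",
	}
	for _, c := range cmds {
		out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile, "--", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}

func main() {
	collectRuntimeDiagnostics("enable-default-cni-988575")
}
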
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 21:13:52
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 21:13:52.462030  212748 out.go:345] Setting OutFile to fd 1 ...
	I0122 21:13:52.462138  212748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:13:52.462146  212748 out.go:358] Setting ErrFile to fd 2...
	I0122 21:13:52.462149  212748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 21:13:52.462330  212748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 21:13:52.462930  212748 out.go:352] Setting JSON to false
	I0122 21:13:52.464076  212748 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10567,"bootTime":1737569865,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 21:13:52.464180  212748 start.go:139] virtualization: kvm guest
	I0122 21:13:52.466534  212748 out.go:177] * [enable-default-cni-988575] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 21:13:52.467937  212748 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 21:13:52.467980  212748 notify.go:220] Checking for updates...
	I0122 21:13:52.471304  212748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 21:13:52.472659  212748 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:13:52.474010  212748 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:13:52.475352  212748 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 21:13:52.476756  212748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 21:13:52.478672  212748 config.go:182] Loaded profile config "bridge-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:13:52.478793  212748 config.go:182] Loaded profile config "default-k8s-diff-port-061998": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:13:52.478905  212748 config.go:182] Loaded profile config "no-preload-086882": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:13:52.479019  212748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 21:13:52.519334  212748 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 21:13:52.520533  212748 start.go:297] selected driver: kvm2
	I0122 21:13:52.520547  212748 start.go:901] validating driver "kvm2" against <nil>
	I0122 21:13:52.520561  212748 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 21:13:52.521312  212748 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:13:52.521426  212748 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-150966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 21:13:52.538996  212748 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 21:13:52.539045  212748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0122 21:13:52.539248  212748 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0122 21:13:52.539288  212748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:13:52.539343  212748 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:13:52.539356  212748 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 21:13:52.539419  212748 start.go:340] cluster config:
	{Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
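
The E-line at 21:13:52.539248 above shows minikube translating the deprecated --enable-default-cni flag into --cni=bridge before the cluster config is generated, which is why the dump ends with EnableDefaultCNI:false CNI:bridge. A hedged sketch of that translation step follows; the type and function names are illustrative, not minikube's start_flags code.

package main

import "fmt"

// clusterFlags models only the two fields involved in the deprecation path.
type clusterFlags struct {
	EnableDefaultCNI bool   // deprecated --enable-default-cni
	CNI              string // canonical --cni value
}

// normalizeCNIFlags rewrites the deprecated flag to the canonical form,
// matching "Found deprecated --enable-default-cni flag, setting --cni=bridge".
func normalizeCNIFlags(f *clusterFlags) {
	if f.EnableDefaultCNI {
		f.CNI = "bridge"
		f.EnableDefaultCNI = false
	}
}

func main() {
	f := clusterFlags{EnableDefaultCNI: true}
	normalizeCNIFlags(&f)
	fmt.Printf("%+v\n", f) // {EnableDefaultCNI:false CNI:bridge}
}
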
	I0122 21:13:52.539516  212748 iso.go:125] acquiring lock: {Name:mkc3bf0604e328871936621dd0e0cda10261a449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 21:13:52.542335  212748 out.go:177] * Starting "enable-default-cni-988575" primary control-plane node in "enable-default-cni-988575" cluster
	I0122 21:13:52.543732  212748 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:13:52.543772  212748 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0122 21:13:52.543786  212748 cache.go:56] Caching tarball of preloaded images
	I0122 21:13:52.543865  212748 preload.go:172] Found /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0122 21:13:52.543879  212748 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0122 21:13:52.543999  212748 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/config.json ...
	I0122 21:13:52.544033  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/config.json: {Name:mk045d9ef235c448cc10a1d364b82bbe2bf70b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:13:52.544255  212748 start.go:360] acquireMachinesLock for enable-default-cni-988575: {Name:mkde076c0ff5ffaed1ac7d9ac4f697ecfb6e2cf2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0122 21:13:52.544319  212748 start.go:364] duration metric: took 37.18µs to acquireMachinesLock for "enable-default-cni-988575"
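
acquireMachinesLock above is a named lock with a 500ms retry delay and a 13m timeout; it returns in ~37µs here because nothing else holds it. A generic sketch of that acquire-with-delay-and-timeout pattern is below; the tryLock hook is an assumption, not minikube's actual lock package.

package main

import (
	"fmt"
	"time"
)

// acquireWithRetry polls tryLock every delay until it succeeds or the
// timeout elapses, mirroring the Delay/Timeout fields in the lock spec above.
func acquireWithRetry(tryLock func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock() {
		if time.Now().After(deadline) {
			return fmt.Errorf("lock not acquired within %s", timeout)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	// Pretend the lock is free, as in this run.
	err := acquireWithRetry(func() bool { return true }, 500*time.Millisecond, 13*time.Minute)
	fmt.Println("acquired:", err == nil)
}
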
	I0122 21:13:52.544347  212748 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 21:13:52.544438  212748 start.go:125] createHost starting for "" (driver="kvm2")
	I0122 21:13:52.546145  212748 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0122 21:13:52.546289  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:13:52.546324  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:13:52.561887  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0122 21:13:52.562347  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:13:52.562865  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:13:52.562894  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:13:52.563253  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:13:52.563458  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:13:52.563604  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:13:52.563809  212748 start.go:159] libmachine.API.Create for "enable-default-cni-988575" (driver="kvm2")
	I0122 21:13:52.563853  212748 client.go:168] LocalClient.Create starting
	I0122 21:13:52.563881  212748 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem
	I0122 21:13:52.563908  212748 main.go:141] libmachine: Decoding PEM data...
	I0122 21:13:52.563932  212748 main.go:141] libmachine: Parsing certificate...
	I0122 21:13:52.564004  212748 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem
	I0122 21:13:52.564043  212748 main.go:141] libmachine: Decoding PEM data...
	I0122 21:13:52.564056  212748 main.go:141] libmachine: Parsing certificate...
	I0122 21:13:52.564077  212748 main.go:141] libmachine: Running pre-create checks...
	I0122 21:13:52.564089  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .PreCreateCheck
	I0122 21:13:52.564477  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetConfigRaw
	I0122 21:13:52.564860  212748 main.go:141] libmachine: Creating machine...
	I0122 21:13:52.564875  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Create
	I0122 21:13:52.565022  212748 main.go:141] libmachine: (enable-default-cni-988575) creating KVM machine...
	I0122 21:13:52.565041  212748 main.go:141] libmachine: (enable-default-cni-988575) creating network...
	I0122 21:13:52.566491  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found existing default KVM network
	I0122 21:13:52.567956  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.567804  212772 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:53:b5:67} reservation:<nil>}
	I0122 21:13:52.568844  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.568747  212772 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:eb:73} reservation:<nil>}
	I0122 21:13:52.569887  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.569789  212772 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:c9:b0:05} reservation:<nil>}
	I0122 21:13:52.570978  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.570894  212772 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003ea4f0}
	I0122 21:13:52.571054  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | created network xml: 
	I0122 21:13:52.571080  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | <network>
	I0122 21:13:52.571091  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   <name>mk-enable-default-cni-988575</name>
	I0122 21:13:52.571104  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   <dns enable='no'/>
	I0122 21:13:52.571116  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   
	I0122 21:13:52.571129  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0122 21:13:52.571138  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |     <dhcp>
	I0122 21:13:52.571143  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0122 21:13:52.571149  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |     </dhcp>
	I0122 21:13:52.571159  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   </ip>
	I0122 21:13:52.571166  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG |   
	I0122 21:13:52.571173  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | </network>
	I0122 21:13:52.571181  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | 
	I0122 21:13:52.576943  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | trying to create private KVM network mk-enable-default-cni-988575 192.168.72.0/24...
	I0122 21:13:52.651263  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | private KVM network mk-enable-default-cni-988575 192.168.72.0/24 created
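
The DBG dump above is the libvirt network definition the kvm2 driver generates before creating the private network: an isolated /24 with DNS disabled and a DHCP range covering .2 through .253. The sketch below reproduces that XML with text/template; the template text matches the dump, while the surrounding code is an illustrative assumption rather than the driver's source.

package main

import (
	"os"
	"text/template"
)

// networkTmpl mirrors the network XML dumped in the log above.
const networkTmpl = `<network>
  <name>mk-{{.Profile}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	t := template.Must(template.New("net").Parse(networkTmpl))
	// Values taken from this run's log; the anonymous struct is ad hoc.
	data := struct{ Profile, Gateway, ClientMin, ClientMax string }{
		"enable-default-cni-988575", "192.168.72.1", "192.168.72.2", "192.168.72.253",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
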
	I0122 21:13:52.651299  212748 main.go:141] libmachine: (enable-default-cni-988575) setting up store path in /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575 ...
	I0122 21:13:52.651317  212748 main.go:141] libmachine: (enable-default-cni-988575) building disk image from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 21:13:52.651377  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.651311  212772 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:13:52.651444  212748 main.go:141] libmachine: (enable-default-cni-988575) Downloading /home/jenkins/minikube-integration/20288-150966/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0122 21:13:52.937242  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:52.937110  212772 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa...
	I0122 21:13:53.068321  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:53.068202  212772 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/enable-default-cni-988575.rawdisk...
	I0122 21:13:53.068350  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Writing magic tar header
	I0122 21:13:53.068364  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Writing SSH key tar header
	I0122 21:13:53.068373  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:53.068345  212772 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575 ...
	I0122 21:13:53.068500  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575
	I0122 21:13:53.068555  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube/machines
	I0122 21:13:53.068584  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575 (perms=drwx------)
	I0122 21:13:53.068596  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 21:13:53.068614  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20288-150966
	I0122 21:13:53.068626  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0122 21:13:53.068640  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home/jenkins
	I0122 21:13:53.068650  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | checking permissions on dir: /home
	I0122 21:13:53.068660  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | skipping /home - not owner
	I0122 21:13:53.068674  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube/machines (perms=drwxr-xr-x)
	I0122 21:13:53.068690  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966/.minikube (perms=drwxr-xr-x)
	I0122 21:13:53.068701  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration/20288-150966 (perms=drwxrwxr-x)
	I0122 21:13:53.068714  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0122 21:13:53.068724  212748 main.go:141] libmachine: (enable-default-cni-988575) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0122 21:13:53.068734  212748 main.go:141] libmachine: (enable-default-cni-988575) creating domain...
	I0122 21:13:53.069937  212748 main.go:141] libmachine: (enable-default-cni-988575) define libvirt domain using xml: 
	I0122 21:13:53.069970  212748 main.go:141] libmachine: (enable-default-cni-988575) <domain type='kvm'>
	I0122 21:13:53.069981  212748 main.go:141] libmachine: (enable-default-cni-988575)   <name>enable-default-cni-988575</name>
	I0122 21:13:53.069995  212748 main.go:141] libmachine: (enable-default-cni-988575)   <memory unit='MiB'>3072</memory>
	I0122 21:13:53.070004  212748 main.go:141] libmachine: (enable-default-cni-988575)   <vcpu>2</vcpu>
	I0122 21:13:53.070017  212748 main.go:141] libmachine: (enable-default-cni-988575)   <features>
	I0122 21:13:53.070023  212748 main.go:141] libmachine: (enable-default-cni-988575)     <acpi/>
	I0122 21:13:53.070028  212748 main.go:141] libmachine: (enable-default-cni-988575)     <apic/>
	I0122 21:13:53.070036  212748 main.go:141] libmachine: (enable-default-cni-988575)     <pae/>
	I0122 21:13:53.070043  212748 main.go:141] libmachine: (enable-default-cni-988575)     
	I0122 21:13:53.070055  212748 main.go:141] libmachine: (enable-default-cni-988575)   </features>
	I0122 21:13:53.070063  212748 main.go:141] libmachine: (enable-default-cni-988575)   <cpu mode='host-passthrough'>
	I0122 21:13:53.070076  212748 main.go:141] libmachine: (enable-default-cni-988575)   
	I0122 21:13:53.070088  212748 main.go:141] libmachine: (enable-default-cni-988575)   </cpu>
	I0122 21:13:53.070097  212748 main.go:141] libmachine: (enable-default-cni-988575)   <os>
	I0122 21:13:53.070107  212748 main.go:141] libmachine: (enable-default-cni-988575)     <type>hvm</type>
	I0122 21:13:53.070115  212748 main.go:141] libmachine: (enable-default-cni-988575)     <boot dev='cdrom'/>
	I0122 21:13:53.070127  212748 main.go:141] libmachine: (enable-default-cni-988575)     <boot dev='hd'/>
	I0122 21:13:53.070136  212748 main.go:141] libmachine: (enable-default-cni-988575)     <bootmenu enable='no'/>
	I0122 21:13:53.070140  212748 main.go:141] libmachine: (enable-default-cni-988575)   </os>
	I0122 21:13:53.070145  212748 main.go:141] libmachine: (enable-default-cni-988575)   <devices>
	I0122 21:13:53.070149  212748 main.go:141] libmachine: (enable-default-cni-988575)     <disk type='file' device='cdrom'>
	I0122 21:13:53.070158  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/boot2docker.iso'/>
	I0122 21:13:53.070166  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target dev='hdc' bus='scsi'/>
	I0122 21:13:53.070171  212748 main.go:141] libmachine: (enable-default-cni-988575)       <readonly/>
	I0122 21:13:53.070176  212748 main.go:141] libmachine: (enable-default-cni-988575)     </disk>
	I0122 21:13:53.070181  212748 main.go:141] libmachine: (enable-default-cni-988575)     <disk type='file' device='disk'>
	I0122 21:13:53.070187  212748 main.go:141] libmachine: (enable-default-cni-988575)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0122 21:13:53.070195  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source file='/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/enable-default-cni-988575.rawdisk'/>
	I0122 21:13:53.070199  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target dev='hda' bus='virtio'/>
	I0122 21:13:53.070204  212748 main.go:141] libmachine: (enable-default-cni-988575)     </disk>
	I0122 21:13:53.070208  212748 main.go:141] libmachine: (enable-default-cni-988575)     <interface type='network'>
	I0122 21:13:53.070214  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source network='mk-enable-default-cni-988575'/>
	I0122 21:13:53.070218  212748 main.go:141] libmachine: (enable-default-cni-988575)       <model type='virtio'/>
	I0122 21:13:53.070223  212748 main.go:141] libmachine: (enable-default-cni-988575)     </interface>
	I0122 21:13:53.070227  212748 main.go:141] libmachine: (enable-default-cni-988575)     <interface type='network'>
	I0122 21:13:53.070232  212748 main.go:141] libmachine: (enable-default-cni-988575)       <source network='default'/>
	I0122 21:13:53.070236  212748 main.go:141] libmachine: (enable-default-cni-988575)       <model type='virtio'/>
	I0122 21:13:53.070241  212748 main.go:141] libmachine: (enable-default-cni-988575)     </interface>
	I0122 21:13:53.070249  212748 main.go:141] libmachine: (enable-default-cni-988575)     <serial type='pty'>
	I0122 21:13:53.070254  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target port='0'/>
	I0122 21:13:53.070263  212748 main.go:141] libmachine: (enable-default-cni-988575)     </serial>
	I0122 21:13:53.070268  212748 main.go:141] libmachine: (enable-default-cni-988575)     <console type='pty'>
	I0122 21:13:53.070277  212748 main.go:141] libmachine: (enable-default-cni-988575)       <target type='serial' port='0'/>
	I0122 21:13:53.070317  212748 main.go:141] libmachine: (enable-default-cni-988575)     </console>
	I0122 21:13:53.070339  212748 main.go:141] libmachine: (enable-default-cni-988575)     <rng model='virtio'>
	I0122 21:13:53.070352  212748 main.go:141] libmachine: (enable-default-cni-988575)       <backend model='random'>/dev/random</backend>
	I0122 21:13:53.070359  212748 main.go:141] libmachine: (enable-default-cni-988575)     </rng>
	I0122 21:13:53.070367  212748 main.go:141] libmachine: (enable-default-cni-988575)     
	I0122 21:13:53.070373  212748 main.go:141] libmachine: (enable-default-cni-988575)     
	I0122 21:13:53.070382  212748 main.go:141] libmachine: (enable-default-cni-988575)   </devices>
	I0122 21:13:53.070388  212748 main.go:141] libmachine: (enable-default-cni-988575) </domain>
	I0122 21:13:53.070404  212748 main.go:141] libmachine: (enable-default-cni-988575) 
	I0122 21:13:53.075192  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:a5:14:6c in network default
	I0122 21:13:53.075828  212748 main.go:141] libmachine: (enable-default-cni-988575) starting domain...
	I0122 21:13:53.075858  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:53.075867  212748 main.go:141] libmachine: (enable-default-cni-988575) ensuring networks are active...
	I0122 21:13:53.076644  212748 main.go:141] libmachine: (enable-default-cni-988575) Ensuring network default is active
	I0122 21:13:53.076984  212748 main.go:141] libmachine: (enable-default-cni-988575) Ensuring network mk-enable-default-cni-988575 is active
	I0122 21:13:53.077543  212748 main.go:141] libmachine: (enable-default-cni-988575) getting domain XML...
	I0122 21:13:53.078350  212748 main.go:141] libmachine: (enable-default-cni-988575) creating domain...
	I0122 21:13:54.434169  212748 main.go:141] libmachine: (enable-default-cni-988575) waiting for IP...
	I0122 21:13:54.435047  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:54.435503  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:54.435567  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:54.435496  212772 retry.go:31] will retry after 260.723128ms: waiting for domain to come up
	I0122 21:13:54.698112  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:54.698752  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:54.698808  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:54.698738  212772 retry.go:31] will retry after 344.421038ms: waiting for domain to come up
	I0122 21:13:55.045156  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:55.045738  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:55.045843  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:55.045724  212772 retry.go:31] will retry after 460.672457ms: waiting for domain to come up
	I0122 21:13:55.508426  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:55.509111  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:55.509142  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:55.509084  212772 retry.go:31] will retry after 539.824691ms: waiting for domain to come up
	I0122 21:13:56.050990  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:56.051505  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:56.051543  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:56.051454  212772 retry.go:31] will retry after 578.212643ms: waiting for domain to come up
	I0122 21:13:56.631107  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:56.631646  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:56.631720  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:56.631610  212772 retry.go:31] will retry after 658.680433ms: waiting for domain to come up
	I0122 21:13:57.291529  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:57.292055  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:57.292088  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:57.292032  212772 retry.go:31] will retry after 1.151478398s: waiting for domain to come up
	I0122 21:13:58.445714  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:58.446251  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:58.446292  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:58.446217  212772 retry.go:31] will retry after 904.224441ms: waiting for domain to come up
	I0122 21:13:59.352476  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:13:59.353064  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:13:59.353089  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:13:59.353039  212772 retry.go:31] will retry after 1.500303009s: waiting for domain to come up
	I0122 21:14:00.855018  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:00.855482  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:00.855509  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:00.855435  212772 retry.go:31] will retry after 1.760740196s: waiting for domain to come up
	I0122 21:14:02.617581  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:02.618106  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:02.618135  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:02.618070  212772 retry.go:31] will retry after 2.14599391s: waiting for domain to come up
	I0122 21:14:04.766356  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:04.766927  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:04.766953  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:04.766832  212772 retry.go:31] will retry after 3.47274679s: waiting for domain to come up
	I0122 21:14:08.241224  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:08.241679  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:08.241704  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:08.241643  212772 retry.go:31] will retry after 4.474921851s: waiting for domain to come up
	I0122 21:14:12.718227  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:12.718877  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find current IP address of domain enable-default-cni-988575 in network mk-enable-default-cni-988575
	I0122 21:14:12.718908  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | I0122 21:14:12.718845  212772 retry.go:31] will retry after 5.670113196s: waiting for domain to come up
	I0122 21:14:18.390428  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.390974  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has current primary IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.391006  212748 main.go:141] libmachine: (enable-default-cni-988575) found domain IP: 192.168.72.236
	I0122 21:14:18.391015  212748 main.go:141] libmachine: (enable-default-cni-988575) reserving static IP address...
	I0122 21:14:18.391415  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-988575", mac: "52:54:00:2a:3f:25", ip: "192.168.72.236"} in network mk-enable-default-cni-988575
	I0122 21:14:18.465163  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Getting to WaitForSSH function...
	I0122 21:14:18.465201  212748 main.go:141] libmachine: (enable-default-cni-988575) reserved static IP address 192.168.72.236 for domain enable-default-cni-988575
	I0122 21:14:18.465215  212748 main.go:141] libmachine: (enable-default-cni-988575) waiting for SSH...
	I0122 21:14:18.468087  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.468463  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.468497  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.468668  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Using SSH client type: external
	I0122 21:14:18.468691  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Using SSH private key: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa (-rw-------)
	I0122 21:14:18.468735  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0122 21:14:18.468754  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | About to run SSH command:
	I0122 21:14:18.468770  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | exit 0
	I0122 21:14:18.594036  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | SSH cmd err, output: <nil>: 
	I0122 21:14:18.594316  212748 main.go:141] libmachine: (enable-default-cni-988575) KVM machine creation complete
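
The "Using SSH client type: external" lines above show readiness being probed by running plain /usr/bin/ssh with host-key checking disabled and the trivial command exit 0; an empty output and nil error means sshd in the guest is up. Below is a sketch of that probe using the same flags as the DBG line; the wrapper function is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` over ssh with options matching the DBG line above;
// a nil error means the daemon accepted the key and executed the command.
func sshReady(user, host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
		"-i", keyPath, "-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa"
	fmt.Println(sshReady("docker", "192.168.72.236", key))
}
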
	I0122 21:14:18.594638  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetConfigRaw
	I0122 21:14:18.595194  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:18.595358  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:18.595517  212748 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0122 21:14:18.595534  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:18.597006  212748 main.go:141] libmachine: Detecting operating system of created instance...
	I0122 21:14:18.597022  212748 main.go:141] libmachine: Waiting for SSH to be available...
	I0122 21:14:18.597030  212748 main.go:141] libmachine: Getting to WaitForSSH function...
	I0122 21:14:18.597038  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.599567  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.599989  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.600019  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.600146  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.600366  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.600523  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.600649  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.600873  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.601079  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.601096  212748 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0122 21:14:18.709367  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0122 21:14:18.709394  212748 main.go:141] libmachine: Detecting the provisioner...
	I0122 21:14:18.709405  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.712583  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.712901  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.712932  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.713098  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.713315  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.713460  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.713577  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.713743  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.713891  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.713902  212748 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0122 21:14:18.822488  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0122 21:14:18.822567  212748 main.go:141] libmachine: found compatible host: buildroot
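
Provisioner detection above is just `cat /etc/os-release` followed by matching the ID field: "buildroot" selects the Buildroot provisioner. A small sketch of parsing that key=value format (values may be quoted); the function name is illustrative.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease reads the key=value lines shown in the SSH output above
// into a map, stripping surrounding quotes, e.g. m["ID"] == "buildroot".
func parseOSRelease(contents string) map[string]string {
	m := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			m[k] = strings.Trim(v, `"`)
		}
	}
	return m
}

func main() {
	m := parseOSRelease("NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n")
	fmt.Println("compatible host:", m["ID"] == "buildroot")
}
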
	I0122 21:14:18.822582  212748 main.go:141] libmachine: Provisioning with buildroot...
	I0122 21:14:18.822594  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:14:18.822851  212748 buildroot.go:166] provisioning hostname "enable-default-cni-988575"
	I0122 21:14:18.822885  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:14:18.823114  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.825940  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.826303  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.826335  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.826494  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.826678  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.826831  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.826996  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.827154  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.827343  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.827361  212748 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-988575 && echo "enable-default-cni-988575" | sudo tee /etc/hostname
	I0122 21:14:18.947616  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-988575
	
	I0122 21:14:18.947647  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:18.950553  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.950947  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:18.950972  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:18.951225  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:18.951446  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.951599  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:18.951750  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:18.951984  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:18.952170  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:18.952189  212748 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-988575' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-988575/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-988575' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0122 21:14:19.066558  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
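
The shell run above is idempotent: it leaves /etc/hosts alone if some line already ends in the hostname, prefers rewriting an existing 127.0.1.1 entry, and only appends as a last resort. The same logic in Go, operating on the file contents as a string; this is an illustrative sketch, not the provisioner's code.

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell above: no-op if the host is already
// named, rewrite an existing 127.0.1.1 line if present, otherwise append.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return hosts // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "enable-default-cni-988575"))
}
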
	I0122 21:14:19.066589  212748 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20288-150966/.minikube CaCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20288-150966/.minikube}
	I0122 21:14:19.066631  212748 buildroot.go:174] setting up certificates
	I0122 21:14:19.066642  212748 provision.go:84] configureAuth start
	I0122 21:14:19.066655  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetMachineName
	I0122 21:14:19.066952  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:19.069744  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.070117  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.070149  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.070288  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.072309  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.072607  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.072637  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.072705  212748 provision.go:143] copyHostCerts
	I0122 21:14:19.072795  212748 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem, removing ...
	I0122 21:14:19.072807  212748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem
	I0122 21:14:19.072873  212748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/key.pem (1675 bytes)
	I0122 21:14:19.073012  212748 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem, removing ...
	I0122 21:14:19.073023  212748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem
	I0122 21:14:19.073050  212748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/ca.pem (1078 bytes)
	I0122 21:14:19.073114  212748 exec_runner.go:144] found /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem, removing ...
	I0122 21:14:19.073121  212748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem
	I0122 21:14:19.073141  212748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20288-150966/.minikube/cert.pem (1123 bytes)
	I0122 21:14:19.073199  212748 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-988575 san=[127.0.0.1 192.168.72.236 enable-default-cni-988575 localhost minikube]
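
The provision.go line above generates a server certificate signed by the profile's CA, with SANs covering loopback, the VM IP, the profile name, localhost and minikube, and the 26280h expiry from the config dump. A compact sketch of issuing such a SAN certificate with crypto/x509 follows; it is self-signed for brevity, whereas minikube signs with its ca.pem/ca-key.pem as the parent.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-988575"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"enable-default-cni-988575", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.236")},
	}
	// Self-signed here; minikube passes its CA cert/key as the parent pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
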
	I0122 21:14:19.172137  212748 provision.go:177] copyRemoteCerts
	I0122 21:14:19.172198  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0122 21:14:19.172221  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.175114  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.175491  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.175526  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.175686  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.175857  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.175975  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.176090  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.261340  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0122 21:14:19.286924  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0122 21:14:19.311436  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0122 21:14:19.335640  212748 provision.go:87] duration metric: took 268.982512ms to configureAuth
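
copyRemoteCerts above pushes the three PEMs into /etc/docker on the guest after the mkdir. When only an ssh channel is wired up, one generic way to do that scp-style copy is to stream the bytes into sudo tee, sketched below; minikube's ssh_runner scp implementation may differ in detail.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// pushFile streams a local file to a remote path by piping it into
// `sudo tee` over ssh; an alternative to scp over an existing ssh setup.
func pushFile(host, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	cmd := exec.Command("/usr/bin/ssh", host,
		fmt.Sprintf("sudo tee %s >/dev/null", remote))
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	// Host and paths are illustrative, taken loosely from this run.
	err := pushFile("docker@192.168.72.236", "server.pem", "/etc/docker/server.pem")
	fmt.Println(err)
}
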
	I0122 21:14:19.335668  212748 buildroot.go:189] setting minikube options for container-runtime
	I0122 21:14:19.335819  212748 config.go:182] Loaded profile config "enable-default-cni-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:14:19.335842  212748 main.go:141] libmachine: Checking connection to Docker...
	I0122 21:14:19.335856  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetURL
	I0122 21:14:19.337207  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | using libvirt version 6000000
	I0122 21:14:19.339361  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.339676  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.339709  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.339861  212748 main.go:141] libmachine: Docker is up and running!
	I0122 21:14:19.339875  212748 main.go:141] libmachine: Reticulating splines...
	I0122 21:14:19.339882  212748 client.go:171] duration metric: took 26.776019518s to LocalClient.Create
	I0122 21:14:19.339905  212748 start.go:167] duration metric: took 26.77609661s to libmachine.API.Create "enable-default-cni-988575"
	I0122 21:14:19.339918  212748 start.go:293] postStartSetup for "enable-default-cni-988575" (driver="kvm2")
	I0122 21:14:19.339931  212748 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0122 21:14:19.339959  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.340221  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0122 21:14:19.340253  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.342393  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.342696  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.342729  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.342842  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.342988  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.343108  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.343250  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.427771  212748 ssh_runner.go:195] Run: cat /etc/os-release
	I0122 21:14:19.431650  212748 info.go:137] Remote host: Buildroot 2023.02.9
	I0122 21:14:19.431684  212748 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/addons for local assets ...
	I0122 21:14:19.431763  212748 filesync.go:126] Scanning /home/jenkins/minikube-integration/20288-150966/.minikube/files for local assets ...
	I0122 21:14:19.431855  212748 filesync.go:149] local asset: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem -> 1582712.pem in /etc/ssl/certs
	I0122 21:14:19.431961  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0122 21:14:19.442056  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:14:19.464446  212748 start.go:296] duration metric: took 124.512955ms for postStartSetup
	I0122 21:14:19.464511  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetConfigRaw
	I0122 21:14:19.465103  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:19.467761  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.468160  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.468192  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.468416  212748 profile.go:143] Saving config to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/config.json ...
	I0122 21:14:19.468600  212748 start.go:128] duration metric: took 26.924150387s to createHost
	I0122 21:14:19.468632  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.471643  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.472067  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.472100  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.472259  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.472452  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.472630  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.472773  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.472937  212748 main.go:141] libmachine: Using SSH client type: native
	I0122 21:14:19.473132  212748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I0122 21:14:19.473145  212748 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0122 21:14:19.586584  212748 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737580459.566029650
	
	I0122 21:14:19.586607  212748 fix.go:216] guest clock: 1737580459.566029650
	I0122 21:14:19.586614  212748 fix.go:229] Guest: 2025-01-22 21:14:19.56602965 +0000 UTC Remote: 2025-01-22 21:14:19.468618964 +0000 UTC m=+27.045457740 (delta=97.410686ms)
	I0122 21:14:19.586639  212748 fix.go:200] guest clock delta is within tolerance: 97.410686ms
	I0122 21:14:19.586646  212748 start.go:83] releasing machines lock for "enable-default-cni-988575", held for 27.04231258s
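
Note: the guest-clock check runs `date +%s.%N` over SSH and compares it with the host timestamp captured just before; the 97.410686ms delta is within the skew tolerance, so no clock fix-up is attempted. A rough manual equivalent (assumes bc on the host; key path taken from this log):

	host_ts=$(date +%s.%N)
	guest_ts=$(ssh -i /home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa \
	    docker@192.168.72.236 date +%s.%N)
	echo "delta: $(echo "$guest_ts - $host_ts" | bc)s"   # compare against the tolerance
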
	I0122 21:14:19.586671  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.586929  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:19.589854  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.590297  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.590336  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.590469  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.591039  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.591232  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:19.591336  212748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0122 21:14:19.591397  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.591454  212748 ssh_runner.go:195] Run: cat /version.json
	I0122 21:14:19.591480  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:19.594144  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.594350  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.594515  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.594538  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.594669  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.594843  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:19.594856  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.594872  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:19.595048  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:19.595050  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.595281  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:19.595326  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.595477  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:19.595616  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:19.697764  212748 ssh_runner.go:195] Run: systemctl --version
	I0122 21:14:19.703608  212748 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0122 21:14:19.709903  212748 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0122 21:14:19.709995  212748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0122 21:14:19.725145  212748 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
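
Note: the find/mv step above renames any stock bridge or podman CNI configs to *.mk_disabled so they cannot shadow the conflist minikube writes later. To restore them by hand, a sketch:

	sudo find /etc/cni/net.d -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
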
	I0122 21:14:19.725164  212748 start.go:495] detecting cgroup driver to use...
	I0122 21:14:19.725233  212748 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0122 21:14:19.754557  212748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0122 21:14:19.767298  212748 docker.go:217] disabling cri-docker service (if available) ...
	I0122 21:14:19.767357  212748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0122 21:14:19.781338  212748 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0122 21:14:19.794364  212748 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0122 21:14:19.917036  212748 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0122 21:14:20.057993  212748 docker.go:233] disabling docker service ...
	I0122 21:14:20.058069  212748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0122 21:14:20.072068  212748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0122 21:14:20.084357  212748 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0122 21:14:20.232819  212748 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0122 21:14:20.364857  212748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0122 21:14:20.377774  212748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0122 21:14:20.395048  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0122 21:14:20.406101  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0122 21:14:20.417078  212748 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0122 21:14:20.417147  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0122 21:14:20.428174  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:14:20.438691  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0122 21:14:20.448932  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0122 21:14:20.459787  212748 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0122 21:14:20.470777  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0122 21:14:20.481308  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0122 21:14:20.491411  212748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
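
Note: the sed pipeline above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image, SystemdCgroup = false (i.e. the cgroupfs driver), the runc v2 runtime, the CNI conf_dir, and unprivileged ports. To confirm the edits before the restart:

	grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
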
	I0122 21:14:20.501617  212748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0122 21:14:20.512416  212748 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0122 21:14:20.512475  212748 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0122 21:14:20.526215  212748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
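
Note: the sysctl probe failed with status 255 only because br_netfilter was not loaded yet; after the modprobe and the ip_forward write, both knobs can be verified (bridge-nf-call-iptables typically defaults to 1 once the module is loaded):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
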
	I0122 21:14:20.535803  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:20.658501  212748 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 21:14:20.686913  212748 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0122 21:14:20.687011  212748 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:14:20.694182  212748 retry.go:31] will retry after 1.006796171s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0122 21:14:21.701278  212748 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0122 21:14:21.707269  212748 start.go:563] Will wait 60s for crictl version
	I0122 21:14:21.707335  212748 ssh_runner.go:195] Run: which crictl
	I0122 21:14:21.711692  212748 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0122 21:14:21.749454  212748 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.23
	RuntimeApiVersion:  v1
	I0122 21:14:21.749535  212748 ssh_runner.go:195] Run: containerd --version
	I0122 21:14:21.774308  212748 ssh_runner.go:195] Run: containerd --version
	I0122 21:14:21.801692  212748 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
	I0122 21:14:21.803066  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetIP
	I0122 21:14:21.806023  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:21.806402  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:21.806434  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:21.806607  212748 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0122 21:14:21.810687  212748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:14:21.823144  212748 kubeadm.go:883] updating cluster {Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0122 21:14:21.823250  212748 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0122 21:14:21.823307  212748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:14:21.855145  212748 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0122 21:14:21.855208  212748 ssh_runner.go:195] Run: which lz4
	I0122 21:14:21.858888  212748 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0122 21:14:21.862698  212748 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0122 21:14:21.862733  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (398131433 bytes)
	I0122 21:14:23.148193  212748 containerd.go:563] duration metric: took 1.289327237s to copy over tarball
	I0122 21:14:23.148289  212748 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0122 21:14:25.356962  212748 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.208632175s)
	I0122 21:14:25.357001  212748 containerd.go:570] duration metric: took 2.208769374s to extract the tarball
	I0122 21:14:25.357013  212748 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0122 21:14:25.397308  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:25.516558  212748 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0122 21:14:25.547883  212748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:14:25.583791  212748 retry.go:31] will retry after 264.622937ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-22T21:14:25Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0122 21:14:25.849327  212748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0122 21:14:25.886521  212748 containerd.go:627] all images are preloaded for containerd runtime.
	I0122 21:14:25.886549  212748 cache_images.go:84] Images are preloaded, skipping loading
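
Note: the preload tarball was extracted straight into /var with xattrs preserved, so containerd's image store is populated without any registry pulls. Once the crictl retry above succeeds, the control-plane images can be listed directly:

	sudo crictl images | grep registry.k8s.io/kube-apiserver   # expect the v1.32.1 tag
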
	I0122 21:14:25.886564  212748 kubeadm.go:934] updating node { 192.168.72.236 8443 v1.32.1 containerd true true} ...
	I0122 21:14:25.886700  212748 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-988575 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
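
Note: the kubelet flags above land in a systemd drop-in (the 331-byte 10-kubeadm.conf scp'd a few lines below). The effective unit, drop-ins included, can be inspected in the guest with:

	systemctl cat kubelet
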
	I0122 21:14:25.886770  212748 ssh_runner.go:195] Run: sudo crictl info
	I0122 21:14:25.919854  212748 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:14:25.919875  212748 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0122 21:14:25.919894  212748 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.236 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-988575 NodeName:enable-default-cni-988575 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0122 21:14:25.919989  212748 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-988575"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.236"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.236"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
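
Note: this generated config is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init (see the cp further down). Recent kubeadm releases can sanity-check such a file offline; a sketch using the binaries path from this log:

	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml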
	
	I0122 21:14:25.920045  212748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0122 21:14:25.931000  212748 binaries.go:44] Found k8s binaries, skipping transfer
	I0122 21:14:25.931066  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0122 21:14:25.940134  212748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0122 21:14:25.957006  212748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0122 21:14:25.972902  212748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2321 bytes)
	I0122 21:14:25.988975  212748 ssh_runner.go:195] Run: grep 192.168.72.236	control-plane.minikube.internal$ /etc/hosts
	I0122 21:14:25.992647  212748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0122 21:14:26.004697  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:26.119955  212748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:14:26.140771  212748 certs.go:68] Setting up /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575 for IP: 192.168.72.236
	I0122 21:14:26.140794  212748 certs.go:194] generating shared ca certs ...
	I0122 21:14:26.140809  212748 certs.go:226] acquiring lock for ca certs: {Name:mk53e9e3df6ffb3fa8285a86887df441ff5826d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.140965  212748 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key
	I0122 21:14:26.141008  212748 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key
	I0122 21:14:26.141021  212748 certs.go:256] generating profile certs ...
	I0122 21:14:26.141078  212748 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.key
	I0122 21:14:26.141091  212748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt with IP's: []
	I0122 21:14:26.208946  212748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt ...
	I0122 21:14:26.208977  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: {Name:mk9883dcae0c1cd3f2f0a907151ab66214df6bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.246185  212748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.key ...
	I0122 21:14:26.246234  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.key: {Name:mk33633cded10207e2390ad08a3dd8fc1c7b5df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.271797  212748 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f
	I0122 21:14:26.271867  212748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.236]
	I0122 21:14:26.558342  212748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f ...
	I0122 21:14:26.558372  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f: {Name:mk023b50773fed80cc80f0a8399195809b6f6481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.558539  212748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f ...
	I0122 21:14:26.558555  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f: {Name:mkbd6f96068489529590a700ebae5eb8ec4ea1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.558652  212748 certs.go:381] copying /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt.04d9a45f -> /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt
	I0122 21:14:26.558744  212748 certs.go:385] copying /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key.04d9a45f -> /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key
	I0122 21:14:26.558797  212748 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key
	I0122 21:14:26.558813  212748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt with IP's: []
	I0122 21:14:26.728616  212748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt ...
	I0122 21:14:26.728653  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt: {Name:mk60d2d3357b997bcee82a68de0c9bab86dcbb59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.728839  212748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key ...
	I0122 21:14:26.728856  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key: {Name:mkb55e3f07cb505298a7cbb607001b0bfa7eb986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:26.729056  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem (1338 bytes)
	W0122 21:14:26.729099  212748 certs.go:480] ignoring /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271_empty.pem, impossibly tiny 0 bytes
	I0122 21:14:26.729111  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca-key.pem (1675 bytes)
	I0122 21:14:26.729133  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/ca.pem (1078 bytes)
	I0122 21:14:26.729166  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/cert.pem (1123 bytes)
	I0122 21:14:26.729187  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/certs/key.pem (1675 bytes)
	I0122 21:14:26.729226  212748 certs.go:484] found cert: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem (1708 bytes)
	I0122 21:14:26.729797  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0122 21:14:26.755665  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0122 21:14:26.779724  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0122 21:14:26.806425  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0122 21:14:26.835884  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0122 21:14:26.866639  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0122 21:14:26.890368  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0122 21:14:26.912613  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0122 21:14:26.937566  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/certs/158271.pem --> /usr/share/ca-certificates/158271.pem (1338 bytes)
	I0122 21:14:26.960509  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/ssl/certs/1582712.pem --> /usr/share/ca-certificates/1582712.pem (1708 bytes)
	I0122 21:14:26.983691  212748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0122 21:14:27.007053  212748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0122 21:14:27.024722  212748 ssh_runner.go:195] Run: openssl version
	I0122 21:14:27.030397  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1582712.pem && ln -fs /usr/share/ca-certificates/1582712.pem /etc/ssl/certs/1582712.pem"
	I0122 21:14:27.042485  212748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1582712.pem
	I0122 21:14:27.046760  212748 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 22 20:06 /usr/share/ca-certificates/1582712.pem
	I0122 21:14:27.046822  212748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1582712.pem
	I0122 21:14:27.052554  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1582712.pem /etc/ssl/certs/3ec20f2e.0"
	I0122 21:14:27.064452  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0122 21:14:27.076200  212748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:14:27.080539  212748 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 22 19:58 /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:14:27.080592  212748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0122 21:14:27.086105  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0122 21:14:27.096656  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/158271.pem && ln -fs /usr/share/ca-certificates/158271.pem /etc/ssl/certs/158271.pem"
	I0122 21:14:27.107085  212748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/158271.pem
	I0122 21:14:27.111204  212748 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 22 20:06 /usr/share/ca-certificates/158271.pem
	I0122 21:14:27.111264  212748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/158271.pem
	I0122 21:14:27.116650  212748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/158271.pem /etc/ssl/certs/51391683.0"
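
Note: the *.0 symlinks follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash` prints the name OpenSSL expects for the link in /etc/ssl/certs. Reproducing the last link above:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/158271.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # 51391683.0 in this run
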
	I0122 21:14:27.130386  212748 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0122 21:14:27.134455  212748 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0122 21:14:27.134507  212748 kubeadm.go:392] StartCluster: {Name:enable-default-cni-988575 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-988575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 21:14:27.134606  212748 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0122 21:14:27.134689  212748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0122 21:14:27.174454  212748 cri.go:89] found id: ""
	I0122 21:14:27.174525  212748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0122 21:14:27.187319  212748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0122 21:14:27.196689  212748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0122 21:14:27.207555  212748 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0122 21:14:27.207592  212748 kubeadm.go:157] found existing configuration files:
	
	I0122 21:14:27.207634  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0122 21:14:27.216519  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0122 21:14:27.216577  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0122 21:14:27.226617  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0122 21:14:27.236183  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0122 21:14:27.236259  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0122 21:14:27.245822  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0122 21:14:27.254665  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0122 21:14:27.254722  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0122 21:14:27.264848  212748 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0122 21:14:27.273731  212748 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0122 21:14:27.273810  212748 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0122 21:14:27.283009  212748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0122 21:14:27.333040  212748 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0122 21:14:27.333164  212748 kubeadm.go:310] [preflight] Running pre-flight checks
	I0122 21:14:27.431695  212748 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0122 21:14:27.431822  212748 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0122 21:14:27.431956  212748 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0122 21:14:27.442198  212748 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0122 21:14:27.490172  212748 out.go:235]   - Generating certificates and keys ...
	I0122 21:14:27.490295  212748 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0122 21:14:27.490384  212748 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0122 21:14:27.570591  212748 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0122 21:14:27.685569  212748 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0122 21:14:27.785177  212748 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0122 21:14:27.976556  212748 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0122 21:14:28.097838  212748 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0122 21:14:28.098048  212748 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-988575 localhost] and IPs [192.168.72.236 127.0.0.1 ::1]
	I0122 21:14:28.185800  212748 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0122 21:14:28.186044  212748 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-988575 localhost] and IPs [192.168.72.236 127.0.0.1 ::1]
	I0122 21:14:28.286073  212748 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0122 21:14:28.486672  212748 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0122 21:14:28.568468  212748 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0122 21:14:28.568563  212748 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0122 21:14:28.976287  212748 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0122 21:14:29.146740  212748 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0122 21:14:29.595476  212748 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0122 21:14:29.847221  212748 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0122 21:14:30.156659  212748 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0122 21:14:30.157193  212748 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0122 21:14:30.159563  212748 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0122 21:14:30.161558  212748 out.go:235]   - Booting up control plane ...
	I0122 21:14:30.161681  212748 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0122 21:14:30.161787  212748 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0122 21:14:30.161901  212748 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0122 21:14:30.178285  212748 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0122 21:14:30.184859  212748 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0122 21:14:30.184917  212748 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0122 21:14:30.320444  212748 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0122 21:14:30.320643  212748 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0122 21:14:31.321913  212748 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001924991s
	I0122 21:14:31.322028  212748 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0122 21:14:35.821765  212748 kubeadm.go:310] [api-check] The API server is healthy after 4.501929141s
	I0122 21:14:35.833862  212748 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0122 21:14:35.848628  212748 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0122 21:14:35.870989  212748 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0122 21:14:35.871171  212748 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-988575 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0122 21:14:35.885261  212748 kubeadm.go:310] [bootstrap-token] Using token: df9fky.0iinyjuwhr05t9v8
	I0122 21:14:35.886772  212748 out.go:235]   - Configuring RBAC rules ...
	I0122 21:14:35.886911  212748 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0122 21:14:35.893522  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0122 21:14:35.901172  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0122 21:14:35.904477  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0122 21:14:35.907919  212748 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0122 21:14:35.911173  212748 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0122 21:14:36.228855  212748 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0122 21:14:36.653094  212748 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0122 21:14:37.227413  212748 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0122 21:14:37.228201  212748 kubeadm.go:310] 
	I0122 21:14:37.228286  212748 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0122 21:14:37.228297  212748 kubeadm.go:310] 
	I0122 21:14:37.228370  212748 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0122 21:14:37.228378  212748 kubeadm.go:310] 
	I0122 21:14:37.228409  212748 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0122 21:14:37.228501  212748 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0122 21:14:37.228560  212748 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0122 21:14:37.228570  212748 kubeadm.go:310] 
	I0122 21:14:37.228651  212748 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0122 21:14:37.228661  212748 kubeadm.go:310] 
	I0122 21:14:37.228728  212748 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0122 21:14:37.228741  212748 kubeadm.go:310] 
	I0122 21:14:37.228795  212748 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0122 21:14:37.228860  212748 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0122 21:14:37.228932  212748 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0122 21:14:37.228941  212748 kubeadm.go:310] 
	I0122 21:14:37.229080  212748 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0122 21:14:37.229194  212748 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0122 21:14:37.229204  212748 kubeadm.go:310] 
	I0122 21:14:37.229320  212748 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token df9fky.0iinyjuwhr05t9v8 \
	I0122 21:14:37.229465  212748 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a \
	I0122 21:14:37.229497  212748 kubeadm.go:310] 	--control-plane 
	I0122 21:14:37.229506  212748 kubeadm.go:310] 
	I0122 21:14:37.229654  212748 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0122 21:14:37.229671  212748 kubeadm.go:310] 
	I0122 21:14:37.229786  212748 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token df9fky.0iinyjuwhr05t9v8 \
	I0122 21:14:37.229908  212748 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88af174ababab22d3fd32c76a81e6e1b2f6ebf2a7a258215c191241a8730421a 
	I0122 21:14:37.231087  212748 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0122 21:14:37.231117  212748 cni.go:84] Creating CNI manager for "bridge"
	I0122 21:14:37.233453  212748 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0122 21:14:37.234647  212748 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0122 21:14:37.246007  212748 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0122 21:14:37.265630  212748 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0122 21:14:37.265768  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:37.265791  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-988575 minikube.k8s.io/updated_at=2025_01_22T21_14_37_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4 minikube.k8s.io/name=enable-default-cni-988575 minikube.k8s.io/primary=true
	I0122 21:14:37.284648  212748 ops.go:34] apiserver oom_adj: -16
	I0122 21:14:37.375150  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:37.875733  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:38.375457  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:38.875854  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:39.375610  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:39.875900  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:40.375504  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:40.875942  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:41.376236  212748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0122 21:14:41.479382  212748 kubeadm.go:1113] duration metric: took 4.213688497s to wait for elevateKubeSystemPrivileges
	I0122 21:14:41.479425  212748 kubeadm.go:394] duration metric: took 14.344921437s to StartCluster
	I0122 21:14:41.479449  212748 settings.go:142] acquiring lock: {Name:mkfbfc304d1e9b2b80529e33af6a426e89d118a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:41.479527  212748 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 21:14:41.481154  212748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20288-150966/kubeconfig: {Name:mk70478f45a79a3b41e7b46029f97939b1511ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0122 21:14:41.481438  212748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0122 21:14:41.481456  212748 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0122 21:14:41.481543  212748 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-988575"
	I0122 21:14:41.481561  212748 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-988575"
	I0122 21:14:41.481591  212748 host.go:66] Checking if "enable-default-cni-988575" exists ...
	I0122 21:14:41.481434  212748 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0122 21:14:41.481625  212748 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-988575"
	I0122 21:14:41.481647  212748 config.go:182] Loaded profile config "enable-default-cni-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 21:14:41.481661  212748 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-988575"
	I0122 21:14:41.482060  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.482082  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.482093  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.482114  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.483953  212748 out.go:177] * Verifying Kubernetes components...
	I0122 21:14:41.485418  212748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0122 21:14:41.498219  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44911
	I0122 21:14:41.498819  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.499472  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.499506  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.499869  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.500149  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
	I0122 21:14:41.500155  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:41.500577  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.501134  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.501152  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.501532  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.502161  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.502189  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.503982  212748 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-988575"
	I0122 21:14:41.504030  212748 host.go:66] Checking if "enable-default-cni-988575" exists ...
	I0122 21:14:41.504412  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.504465  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.520906  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0122 21:14:41.521338  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.521861  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.521887  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.522373  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.522604  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:41.524388  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:41.526057  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0122 21:14:41.526074  212748 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0122 21:14:41.526518  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.527106  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.527131  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.527533  212748 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:14:41.527551  212748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0122 21:14:41.527565  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:41.527680  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.528088  212748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 21:14:41.528119  212748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 21:14:41.530246  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.530628  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:41.530645  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.530846  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:41.530989  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:41.531078  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:41.531905  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:41.551055  212748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0122 21:14:41.551680  212748 main.go:141] libmachine: () Calling .GetVersion
	I0122 21:14:41.552329  212748 main.go:141] libmachine: Using API Version  1
	I0122 21:14:41.552355  212748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 21:14:41.552920  212748 main.go:141] libmachine: () Calling .GetMachineName
	I0122 21:14:41.553124  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetState
	I0122 21:14:41.554736  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .DriverName
	I0122 21:14:41.554997  212748 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0122 21:14:41.555014  212748 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0122 21:14:41.555033  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHHostname
	I0122 21:14:41.558034  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.558472  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:3f:25", ip: ""} in network mk-enable-default-cni-988575: {Iface:virbr1 ExpiryTime:2025-01-22 22:14:07 +0000 UTC Type:0 Mac:52:54:00:2a:3f:25 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:enable-default-cni-988575 Clientid:01:52:54:00:2a:3f:25}
	I0122 21:14:41.558498  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | domain enable-default-cni-988575 has defined IP address 192.168.72.236 and MAC address 52:54:00:2a:3f:25 in network mk-enable-default-cni-988575
	I0122 21:14:41.558719  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHPort
	I0122 21:14:41.558959  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHKeyPath
	I0122 21:14:41.559140  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .GetSSHUsername
	I0122 21:14:41.559327  212748 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/enable-default-cni-988575/id_rsa Username:docker}
	I0122 21:14:41.741072  212748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0122 21:14:41.741111  212748 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0122 21:14:41.830050  212748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0122 21:14:41.850093  212748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0122 21:14:42.337101  212748 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0122 21:14:42.338167  212748 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-988575" to be "Ready" ...
	I0122 21:14:42.355841  212748 node_ready.go:49] node "enable-default-cni-988575" has status "Ready":"True"
	I0122 21:14:42.355877  212748 node_ready.go:38] duration metric: took 17.683559ms for node "enable-default-cni-988575" to be "Ready" ...
	I0122 21:14:42.355890  212748 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:14:42.384983  212748 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace to be "Ready" ...
	I0122 21:14:42.805295  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805330  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805339  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.805350  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.805621  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.805621  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.805644  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.805653  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805660  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.805667  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.805695  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.805704  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.805720  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.805728  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.806039  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.806042  212748 main.go:141] libmachine: (enable-default-cni-988575) DBG | Closing plugin on server side
	I0122 21:14:42.806052  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.806074  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.806080  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.806088  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.820226  212748 main.go:141] libmachine: Making call to close driver server
	I0122 21:14:42.820246  212748 main.go:141] libmachine: (enable-default-cni-988575) Calling .Close
	I0122 21:14:42.820552  212748 main.go:141] libmachine: Successfully made call to close driver server
	I0122 21:14:42.820571  212748 main.go:141] libmachine: Making call to close connection to plugin binary
	I0122 21:14:42.822239  212748 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0122 21:14:42.823426  212748 addons.go:514] duration metric: took 1.34196753s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0122 21:14:42.846707  212748 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-988575" context rescaled to 1 replicas
	I0122 21:14:44.391239  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:46.890787  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:48.891457  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:50.892478  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:53.390442  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:55.390937  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:57.391475  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:14:59.890363  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:01.891101  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:03.891544  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:06.391260  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:08.891979  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:11.391874  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:13.890858  212748 pod_ready.go:103] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"False"
	I0122 21:15:14.391385  212748 pod_ready.go:93] pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.391414  212748 pod_ready.go:82] duration metric: took 32.006398889s for pod "coredns-668d6bf9bc-8k2mr" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.391431  212748 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.393204  212748 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-t62dc" not found
	I0122 21:15:14.393231  212748 pod_ready.go:82] duration metric: took 1.79275ms for pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace to be "Ready" ...
	E0122 21:15:14.393241  212748 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-t62dc" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-t62dc" not found
	I0122 21:15:14.393252  212748 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.397371  212748 pod_ready.go:93] pod "etcd-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.397397  212748 pod_ready.go:82] duration metric: took 4.137052ms for pod "etcd-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.397406  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.401206  212748 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.401224  212748 pod_ready.go:82] duration metric: took 3.811097ms for pod "kube-apiserver-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.401235  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.405039  212748 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.405056  212748 pod_ready.go:82] duration metric: took 3.815782ms for pod "kube-controller-manager-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.405064  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-pqfgf" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.588746  212748 pod_ready.go:93] pod "kube-proxy-pqfgf" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.588771  212748 pod_ready.go:82] duration metric: took 183.700915ms for pod "kube-proxy-pqfgf" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.588781  212748 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.988925  212748 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-988575" in "kube-system" namespace has status "Ready":"True"
	I0122 21:15:14.988961  212748 pod_ready.go:82] duration metric: took 400.171514ms for pod "kube-scheduler-enable-default-cni-988575" in "kube-system" namespace to be "Ready" ...
	I0122 21:15:14.988974  212748 pod_ready.go:39] duration metric: took 32.633070501s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0122 21:15:14.988998  212748 api_server.go:52] waiting for apiserver process to appear ...
	I0122 21:15:14.989065  212748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 21:15:15.003050  212748 api_server.go:72] duration metric: took 33.521423742s to wait for apiserver process to appear ...
	I0122 21:15:15.003081  212748 api_server.go:88] waiting for apiserver healthz status ...
	I0122 21:15:15.003104  212748 api_server.go:253] Checking apiserver healthz at https://192.168.72.236:8443/healthz ...
	I0122 21:15:15.007405  212748 api_server.go:279] https://192.168.72.236:8443/healthz returned 200:
	ok
	I0122 21:15:15.008265  212748 api_server.go:141] control plane version: v1.32.1
	I0122 21:15:15.008291  212748 api_server.go:131] duration metric: took 5.201626ms to wait for apiserver health ...
	I0122 21:15:15.008300  212748 system_pods.go:43] waiting for kube-system pods to appear ...
	I0122 21:15:15.190943  212748 system_pods.go:59] 7 kube-system pods found
	I0122 21:15:15.190980  212748 system_pods.go:61] "coredns-668d6bf9bc-8k2mr" [e3982f26-ae3b-4628-99a6-4d6cbcf75579] Running
	I0122 21:15:15.190986  212748 system_pods.go:61] "etcd-enable-default-cni-988575" [a3418942-728d-4bcd-a56a-b1b40b3c9480] Running
	I0122 21:15:15.190990  212748 system_pods.go:61] "kube-apiserver-enable-default-cni-988575" [50840094-887a-4220-8537-bc0aa3e0096f] Running
	I0122 21:15:15.190993  212748 system_pods.go:61] "kube-controller-manager-enable-default-cni-988575" [33fba83d-f193-4951-bec6-060ab5644e77] Running
	I0122 21:15:15.190996  212748 system_pods.go:61] "kube-proxy-pqfgf" [dbfd454c-8d4f-41fc-b630-9687e1cc00de] Running
	I0122 21:15:15.190999  212748 system_pods.go:61] "kube-scheduler-enable-default-cni-988575" [f8e36fef-016a-4800-b727-629672d1dd3a] Running
	I0122 21:15:15.191002  212748 system_pods.go:61] "storage-provisioner" [de70f162-242c-4c9f-83be-78eb9d99e78b] Running
	I0122 21:15:15.191008  212748 system_pods.go:74] duration metric: took 182.701656ms to wait for pod list to return data ...
	I0122 21:15:15.191021  212748 default_sa.go:34] waiting for default service account to be created ...
	I0122 21:15:15.389632  212748 default_sa.go:45] found service account: "default"
	I0122 21:15:15.389660  212748 default_sa.go:55] duration metric: took 198.632639ms for default service account to be created ...
	I0122 21:15:15.389673  212748 system_pods.go:137] waiting for k8s-apps to be running ...
	I0122 21:15:15.591099  212748 system_pods.go:87] 7 kube-system pods found
	I0122 21:15:15.789898  212748 system_pods.go:105] "coredns-668d6bf9bc-8k2mr" [e3982f26-ae3b-4628-99a6-4d6cbcf75579] Running
	I0122 21:15:15.789933  212748 system_pods.go:105] "etcd-enable-default-cni-988575" [a3418942-728d-4bcd-a56a-b1b40b3c9480] Running
	I0122 21:15:15.789943  212748 system_pods.go:105] "kube-apiserver-enable-default-cni-988575" [50840094-887a-4220-8537-bc0aa3e0096f] Running
	I0122 21:15:15.789969  212748 system_pods.go:105] "kube-controller-manager-enable-default-cni-988575" [33fba83d-f193-4951-bec6-060ab5644e77] Running
	I0122 21:15:15.789986  212748 system_pods.go:105] "kube-proxy-pqfgf" [dbfd454c-8d4f-41fc-b630-9687e1cc00de] Running
	I0122 21:15:15.789995  212748 system_pods.go:105] "kube-scheduler-enable-default-cni-988575" [f8e36fef-016a-4800-b727-629672d1dd3a] Running
	I0122 21:15:15.790008  212748 system_pods.go:105] "storage-provisioner" [de70f162-242c-4c9f-83be-78eb9d99e78b] Running
	I0122 21:15:15.790024  212748 system_pods.go:147] duration metric: took 400.342486ms to wait for k8s-apps to be running ...
	I0122 21:15:15.790039  212748 system_svc.go:44] waiting for kubelet service to be running ....
	I0122 21:15:15.790104  212748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 21:15:15.805060  212748 system_svc.go:56] duration metric: took 15.009919ms WaitForService to wait for kubelet
	I0122 21:15:15.805095  212748 kubeadm.go:582] duration metric: took 34.323472111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0122 21:15:15.805117  212748 node_conditions.go:102] verifying NodePressure condition ...
	I0122 21:15:15.989985  212748 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0122 21:15:15.990024  212748 node_conditions.go:123] node cpu capacity is 2
	I0122 21:15:15.990040  212748 node_conditions.go:105] duration metric: took 184.917088ms to run NodePressure ...
	I0122 21:15:15.990057  212748 start.go:241] waiting for startup goroutines ...
	I0122 21:15:15.990067  212748 start.go:246] waiting for cluster config update ...
	I0122 21:15:15.990082  212748 start.go:255] writing updated cluster config ...
	I0122 21:15:15.990362  212748 ssh_runner.go:195] Run: rm -f paused
	I0122 21:15:16.038542  212748 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0122 21:15:16.040655  212748 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-988575" cluster and "default" namespace by default
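	The diagnostic sections that follow (container status, containerd, coredns, describe nodes, dmesg, etcd, kernel, kube-apiserver, kube-controller-manager) use the layout produced by the minikube logs command, captured here for the failing no-preload cluster. A sketch of how an equivalent dump could be regenerated, assuming the profile name matches the node name shown below:
	
	  out/minikube-linux-amd64 logs -p no-preload-086882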
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	fc1cd5f0f8fa1       523cad1a4df73       15 seconds ago      Exited              dashboard-metrics-scraper   9                   ac8f20666692c       dashboard-metrics-scraper-86c6bf9756-kcdms
	f16de89782186       07655ddf2eebe       21 minutes ago      Running             kubernetes-dashboard        0                   d605dd67e6b1a       kubernetes-dashboard-7779f9b69b-k85jb
	c52b0f6c93984       6e38f40d628db       21 minutes ago      Running             storage-provisioner         0                   cf95b57d8186a       storage-provisioner
	2760d1635e187       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   abe4858b8af95       coredns-668d6bf9bc-zmk2l
	c5d43e7aa1bd1       c69fa2e9cbf5f       21 minutes ago      Running             coredns                     0                   2b1ce9bc7ea57       coredns-668d6bf9bc-z8m8n
	e95e3f7588ed5       e29f9c7391fd9       21 minutes ago      Running             kube-proxy                  0                   11f05e192ac6d       kube-proxy-6zrkm
	579e4003887e4       019ee182b58e2       21 minutes ago      Running             kube-controller-manager     2                   a0e42e80f670a       kube-controller-manager-no-preload-086882
	4345d865aa7e7       95c0bda56fc4d       21 minutes ago      Running             kube-apiserver              2                   30e6ac3bb147c       kube-apiserver-no-preload-086882
	0771c8f7ef7be       a9e7e6b294baf       21 minutes ago      Running             etcd                        2                   6383ce13e8db7       etcd-no-preload-086882
	6baff21c63d7e       2b0d6572d062c       21 minutes ago      Running             kube-scheduler              2                   e0f020e236ea3       kube-scheduler-no-preload-086882
	
	
	==> containerd <==
	Jan 22 21:24:22 no-preload-086882 containerd[555]: time="2025-01-22T21:24:22.136455519Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 22 21:24:22 no-preload-086882 containerd[555]: time="2025-01-22T21:24:22.138310555Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 22 21:24:22 no-preload-086882 containerd[555]: time="2025-01-22T21:24:22.138395632Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.127569402Z" level=info msg="CreateContainer within sandbox \"ac8f20666692c4c9203803da63809dcbdce83cf5f39a6a58dc6028cac7243e02\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,}"
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.151355485Z" level=info msg="CreateContainer within sandbox \"ac8f20666692c4c9203803da63809dcbdce83cf5f39a6a58dc6028cac7243e02\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a\""
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.152341095Z" level=info msg="StartContainer for \"0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a\""
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.206862123Z" level=info msg="StartContainer for \"0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a\" returns successfully"
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.253191382Z" level=info msg="shim disconnected" id=0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a namespace=k8s.io
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.253465226Z" level=warning msg="cleaning up after shim disconnected" id=0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a namespace=k8s.io
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.253640612Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.697023708Z" level=info msg="RemoveContainer for \"347ec2b6a2e9e0876e6faefe9a5ed59bd02233d3ee3a1442091b7139b03ec39a\""
	Jan 22 21:24:54 no-preload-086882 containerd[555]: time="2025-01-22T21:24:54.703518889Z" level=info msg="RemoveContainer for \"347ec2b6a2e9e0876e6faefe9a5ed59bd02233d3ee3a1442091b7139b03ec39a\" returns successfully"
	Jan 22 21:29:32 no-preload-086882 containerd[555]: time="2025-01-22T21:29:32.126638936Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 22 21:29:32 no-preload-086882 containerd[555]: time="2025-01-22T21:29:32.135997140Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Jan 22 21:29:32 no-preload-086882 containerd[555]: time="2025-01-22T21:29:32.138196276Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Jan 22 21:29:32 no-preload-086882 containerd[555]: time="2025-01-22T21:29:32.138270906Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.127900379Z" level=info msg="CreateContainer within sandbox \"ac8f20666692c4c9203803da63809dcbdce83cf5f39a6a58dc6028cac7243e02\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.157662857Z" level=info msg="CreateContainer within sandbox \"ac8f20666692c4c9203803da63809dcbdce83cf5f39a6a58dc6028cac7243e02\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416\""
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.158733919Z" level=info msg="StartContainer for \"fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416\""
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.210561735Z" level=info msg="StartContainer for \"fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416\" returns successfully"
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.262733884Z" level=info msg="shim disconnected" id=fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416 namespace=k8s.io
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.262791298Z" level=warning msg="cleaning up after shim disconnected" id=fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416 namespace=k8s.io
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.262800938Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.396738941Z" level=info msg="RemoveContainer for \"0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a\""
	Jan 22 21:29:57 no-preload-086882 containerd[555]: time="2025-01-22T21:29:57.404027830Z" level=info msg="RemoveContainer for \"0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a\" returns successfully"
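	The PullImage failures above are deterministic name-resolution failures, not transient registry errors: fake.domain never resolves, so every retry for fake.domain/registry.k8s.io/echoserver:1.4 ends in "no such host". A sketch for confirming which workload carries that image reference, assuming the owning Deployment is the metrics-server entry listed in the node summary below and that kubectl is pointed at this cluster's context:
	
	  kubectl --context no-preload-086882 -n kube-system get deploy metrics-server \
	    -o jsonpath='{.spec.template.spec.containers[0].image}'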
	
	
	==> coredns [2760d1635e1879967272523f01003ad5fe4437c9c38df92cd7ec0095bc1187cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c5d43e7aa1bd1d4aa1bc4f41b3b9c36b28af70af331cf7e5799d0f4126972c3a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-086882
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-086882
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3e9f161b4385e25ed54b565cd944f46507981c4
	                    minikube.k8s.io/name=no-preload-086882
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_22T21_08_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 Jan 2025 21:08:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-086882
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 Jan 2025 21:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 Jan 2025 21:29:41 +0000   Wed, 22 Jan 2025 21:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 Jan 2025 21:29:41 +0000   Wed, 22 Jan 2025 21:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 Jan 2025 21:29:41 +0000   Wed, 22 Jan 2025 21:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 Jan 2025 21:29:41 +0000   Wed, 22 Jan 2025 21:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    no-preload-086882
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ff9c28e61274659bd631f20eb9a8361
	  System UUID:                6ff9c28e-6127-4659-bd63-1f20eb9a8361
	  Boot ID:                    37b97869-e45e-419c-9abe-100d5a89b6b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-z8m8n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-zmk2l                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-086882                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-086882              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-086882     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-6zrkm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-086882              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-vrrbf                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-kcdms    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-k85jb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-086882 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-086882 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-086882 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-086882 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-086882 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-086882 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-086882 event: Registered Node no-preload-086882 in Controller
	
	
	==> dmesg <==
	[  +0.037134] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.852750] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.934246] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.563511] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.836829] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.061853] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067352] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +0.189977] systemd-fstab-generator[505]: Ignoring "noauto" option for root device
	[  +0.111655] systemd-fstab-generator[517]: Ignoring "noauto" option for root device
	[  +0.311491] systemd-fstab-generator[547]: Ignoring "noauto" option for root device
	[  +1.620302] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +2.237594] systemd-fstab-generator[747]: Ignoring "noauto" option for root device
	[  +0.824779] kauditd_printk_skb: 225 callbacks suppressed
	[Jan22 21:04] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.940620] kauditd_printk_skb: 56 callbacks suppressed
	[  +6.021496] kauditd_printk_skb: 9 callbacks suppressed
	[Jan22 21:08] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +6.086238] systemd-fstab-generator[3440]: Ignoring "noauto" option for root device
	[  +0.080256] kauditd_printk_skb: 87 callbacks suppressed
	[  +5.268660] systemd-fstab-generator[3542]: Ignoring "noauto" option for root device
	[  +0.101202] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.194040] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.024586] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [0771c8f7ef7bed2fb2357c43387bacde427764327c4be64788e1f62a18e0d437] <==
	{"level":"warn","ts":"2025-01-22T21:13:18.309417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.457766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-f79f97bbb-vrrbf.181d1fe8a0ebdbf9\" limit:1 ","response":"range_response_count:1 size:823"}
	{"level":"info","ts":"2025-01-22T21:13:18.309581Z","caller":"traceutil/trace.go:171","msg":"trace[2101197128] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-f79f97bbb-vrrbf.181d1fe8a0ebdbf9; range_end:; response_count:1; response_revision:864; }","duration":"181.709726ms","start":"2025-01-22T21:13:18.127855Z","end":"2025-01-22T21:13:18.309565Z","steps":["trace[2101197128] 'range keys from in-memory index tree'  (duration: 180.070587ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-22T21:14:26.309886Z","caller":"traceutil/trace.go:171","msg":"trace[764022986] transaction","detail":"{read_only:false; response_revision:926; number_of_response:1; }","duration":"189.480843ms","start":"2025-01-22T21:14:26.120389Z","end":"2025-01-22T21:14:26.309870Z","steps":["trace[764022986] 'process raft request'  (duration: 189.23985ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:14:26.649796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.783155ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8080184026985538509 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:924 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-22T21:14:26.649937Z","caller":"traceutil/trace.go:171","msg":"trace[519031229] linearizableReadLoop","detail":"{readStateIndex:1011; appliedIndex:1010; }","duration":"307.366191ms","start":"2025-01-22T21:14:26.342515Z","end":"2025-01-22T21:14:26.649881Z","steps":["trace[519031229] 'read index received'  (duration: 85.365369ms)","trace[519031229] 'applied index is now lower than readState.Index'  (duration: 221.99937ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-22T21:14:26.650393Z","caller":"traceutil/trace.go:171","msg":"trace[1220762286] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"334.353821ms","start":"2025-01-22T21:14:26.316023Z","end":"2025-01-22T21:14:26.650376Z","steps":["trace[1220762286] 'process raft request'  (duration: 111.911954ms)","trace[1220762286] 'compare'  (duration: 221.627369ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-22T21:14:26.650616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-22T21:14:26.316010Z","time spent":"334.445199ms","remote":"127.0.0.1:42560","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:924 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-22T21:14:26.650996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.48556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.177\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-01-22T21:14:26.651092Z","caller":"traceutil/trace.go:171","msg":"trace[1468946923] range","detail":"{range_begin:/registry/masterleases/192.168.39.177; range_end:; response_count:1; response_revision:927; }","duration":"308.581725ms","start":"2025-01-22T21:14:26.342468Z","end":"2025-01-22T21:14:26.651049Z","steps":["trace[1468946923] 'agreement among raft nodes before linearized reading'  (duration: 308.443379ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:14:26.651334Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-22T21:14:26.342449Z","time spent":"308.826769ms","remote":"127.0.0.1:42444","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/192.168.39.177\" limit:1 "}
	{"level":"warn","ts":"2025-01-22T21:14:26.651626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.808529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:14:26.651694Z","caller":"traceutil/trace.go:171","msg":"trace[567414228] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:927; }","duration":"193.892194ms","start":"2025-01-22T21:14:26.457787Z","end":"2025-01-22T21:14:26.651679Z","steps":["trace[567414228] 'agreement among raft nodes before linearized reading'  (duration: 193.806753ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:14:26.652178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.829593ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-22T21:14:26.652225Z","caller":"traceutil/trace.go:171","msg":"trace[35811928] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:927; }","duration":"177.899568ms","start":"2025-01-22T21:14:26.474317Z","end":"2025-01-22T21:14:26.652217Z","steps":["trace[35811928] 'agreement among raft nodes before linearized reading'  (duration: 177.836935ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-22T21:14:26.896561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.647077ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8080184026985538516 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.177\" mod_revision:918 > success:<request_put:<key:\"/registry/masterleases/192.168.39.177\" value_size:67 lease:8080184026985538512 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.177\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-22T21:14:26.897253Z","caller":"traceutil/trace.go:171","msg":"trace[1532696446] transaction","detail":"{read_only:false; response_revision:928; number_of_response:1; }","duration":"176.637332ms","start":"2025-01-22T21:14:26.720599Z","end":"2025-01-22T21:14:26.897236Z","steps":["trace[1532696446] 'process raft request'  (duration: 59.261034ms)","trace[1532696446] 'compare'  (duration: 116.58153ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-22T21:18:23.229975Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":869}
	{"level":"info","ts":"2025-01-22T21:18:23.265899Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":869,"took":"35.410332ms","hash":4105064000,"current-db-size-bytes":2965504,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2965504,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-22T21:18:23.266076Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4105064000,"revision":869,"compact-revision":-1}
	{"level":"info","ts":"2025-01-22T21:23:23.238417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1121}
	{"level":"info","ts":"2025-01-22T21:23:23.242393Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1121,"took":"3.651491ms","hash":332905398,"current-db-size-bytes":2965504,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1781760,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-22T21:23:23.242446Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":332905398,"revision":1121,"compact-revision":869}
	{"level":"info","ts":"2025-01-22T21:28:23.246078Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1373}
	{"level":"info","ts":"2025-01-22T21:28:23.249493Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1373,"took":"3.148719ms","hash":2101075959,"current-db-size-bytes":2965504,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1888256,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-01-22T21:28:23.249536Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2101075959,"revision":1373,"compact-revision":1121}
	
	
	==> kernel <==
	 21:30:12 up 26 min,  0 users,  load average: 0.15, 0.16, 0.17
	Linux no-preload-086882 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4345d865aa7e776a586875231675b80946a937e178d55608524e16b8b3550a11] <==
	I0122 21:26:25.707694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:26:25.708845       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0122 21:28:24.705498       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:28:24.705837       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0122 21:28:25.707843       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:28:25.707942       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0122 21:28:25.708154       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:28:25.708333       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0122 21:28:25.709089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:28:25.709472       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0122 21:29:25.709847       1 handler_proxy.go:99] no RequestInfo found in the context
	W0122 21:29:25.709857       1 handler_proxy.go:99] no RequestInfo found in the context
	E0122 21:29:25.710265       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0122 21:29:25.710569       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0122 21:29:25.711675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0122 21:29:25.711920       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [579e4003887e4178281833ae2e817d965b96b5e7f52f4ac1d65b228efededa1c] <==
	E0122 21:25:31.438854       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:25:31.503615       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:26:01.445324       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:26:01.509862       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:26:31.451518       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:26:31.517201       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:27:01.457450       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:27:01.525890       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:27:31.464981       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:27:31.534657       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:28:01.472670       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:28:01.541643       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:28:31.477984       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:28:31.548285       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:29:01.483734       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:29:01.556491       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0122 21:29:31.489719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:29:31.563647       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0122 21:29:41.430284       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-086882"
	I0122 21:29:47.140391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="109.77µs"
	I0122 21:29:57.410439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="46.652µs"
	I0122 21:30:00.486692       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="51.771µs"
	I0122 21:30:01.141188       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="115.476µs"
	E0122 21:30:01.495921       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0122 21:30:01.571287       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [e95e3f7588ed51ab92e3db81b797cc772e2aff833e18d3b41df3466b30bdf46a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0122 21:08:33.168597       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0122 21:08:33.266384       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.177"]
	E0122 21:08:33.266435       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0122 21:08:33.353797       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0122 21:08:33.353828       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0122 21:08:33.353851       1 server_linux.go:170] "Using iptables Proxier"
	I0122 21:08:33.359231       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0122 21:08:33.359482       1 server.go:497] "Version info" version="v1.32.1"
	I0122 21:08:33.359494       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0122 21:08:33.360823       1 config.go:199] "Starting service config controller"
	I0122 21:08:33.360845       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0122 21:08:33.360871       1 config.go:105] "Starting endpoint slice config controller"
	I0122 21:08:33.360874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0122 21:08:33.361382       1 config.go:329] "Starting node config controller"
	I0122 21:08:33.361389       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0122 21:08:33.461279       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0122 21:08:33.461321       1 shared_informer.go:320] Caches are synced for service config
	I0122 21:08:33.461704       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6baff21c63d7ecbbd00a0666d0561bda24d6ed2953e2db949803f7d7164ea9ca] <==
	W0122 21:08:24.803592       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0122 21:08:24.803824       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:24.804023       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0122 21:08:24.804189       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.616701       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0122 21:08:25.616839       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.673531       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0122 21:08:25.673768       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.730281       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0122 21:08:25.730530       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.762906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0122 21:08:25.763071       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.820587       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0122 21:08:25.820719       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.927795       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0122 21:08:25.927933       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.939465       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0122 21:08:25.939857       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.940847       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0122 21:08:25.941024       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:25.983291       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0122 21:08:25.983533       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0122 21:08:26.013986       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0122 21:08:26.014191       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0122 21:08:27.869432       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 22 21:29:18 no-preload-086882 kubelet[3447]: E0122 21:29:18.125741    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kcdms_kubernetes-dashboard(0fc12af0-f22a-48f6-bb78-8f5a16bbda3f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kcdms" podUID="0fc12af0-f22a-48f6-bb78-8f5a16bbda3f"
	Jan 22 21:29:21 no-preload-086882 kubelet[3447]: E0122 21:29:21.125606    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vrrbf" podUID="ad6b1544-2e14-4185-99b2-06a343ca594f"
	Jan 22 21:29:27 no-preload-086882 kubelet[3447]: E0122 21:29:27.144903    3447 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 22 21:29:27 no-preload-086882 kubelet[3447]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 22 21:29:27 no-preload-086882 kubelet[3447]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 22 21:29:27 no-preload-086882 kubelet[3447]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 22 21:29:27 no-preload-086882 kubelet[3447]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 22 21:29:31 no-preload-086882 kubelet[3447]: I0122 21:29:31.125096    3447 scope.go:117] "RemoveContainer" containerID="0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a"
	Jan 22 21:29:31 no-preload-086882 kubelet[3447]: E0122 21:29:31.126053    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kcdms_kubernetes-dashboard(0fc12af0-f22a-48f6-bb78-8f5a16bbda3f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kcdms" podUID="0fc12af0-f22a-48f6-bb78-8f5a16bbda3f"
	Jan 22 21:29:32 no-preload-086882 kubelet[3447]: E0122 21:29:32.138535    3447 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 22 21:29:32 no-preload-086882 kubelet[3447]: E0122 21:29:32.138822    3447 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 22 21:29:32 no-preload-086882 kubelet[3447]: E0122 21:29:32.139066    3447 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2fsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-vrrbf_kube-system(ad6b1544-2e14-4185-99b2-06a343ca594f): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 22 21:29:32 no-preload-086882 kubelet[3447]: E0122 21:29:32.140569    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vrrbf" podUID="ad6b1544-2e14-4185-99b2-06a343ca594f"
	Jan 22 21:29:43 no-preload-086882 kubelet[3447]: I0122 21:29:43.126091    3447 scope.go:117] "RemoveContainer" containerID="0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a"
	Jan 22 21:29:43 no-preload-086882 kubelet[3447]: E0122 21:29:43.126630    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kcdms_kubernetes-dashboard(0fc12af0-f22a-48f6-bb78-8f5a16bbda3f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kcdms" podUID="0fc12af0-f22a-48f6-bb78-8f5a16bbda3f"
	Jan 22 21:29:47 no-preload-086882 kubelet[3447]: E0122 21:29:47.125852    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vrrbf" podUID="ad6b1544-2e14-4185-99b2-06a343ca594f"
	Jan 22 21:29:57 no-preload-086882 kubelet[3447]: I0122 21:29:57.124900    3447 scope.go:117] "RemoveContainer" containerID="0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a"
	Jan 22 21:29:57 no-preload-086882 kubelet[3447]: I0122 21:29:57.391992    3447 scope.go:117] "RemoveContainer" containerID="0c5c0409bda0c2b4fa581b98cfaf96c8b73fa06ed36945fc4b3b1a65e2ab365a"
	Jan 22 21:29:57 no-preload-086882 kubelet[3447]: I0122 21:29:57.392399    3447 scope.go:117] "RemoveContainer" containerID="fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416"
	Jan 22 21:29:57 no-preload-086882 kubelet[3447]: E0122 21:29:57.395463    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kcdms_kubernetes-dashboard(0fc12af0-f22a-48f6-bb78-8f5a16bbda3f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kcdms" podUID="0fc12af0-f22a-48f6-bb78-8f5a16bbda3f"
	Jan 22 21:30:00 no-preload-086882 kubelet[3447]: I0122 21:30:00.473427    3447 scope.go:117] "RemoveContainer" containerID="fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416"
	Jan 22 21:30:00 no-preload-086882 kubelet[3447]: E0122 21:30:00.473598    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kcdms_kubernetes-dashboard(0fc12af0-f22a-48f6-bb78-8f5a16bbda3f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kcdms" podUID="0fc12af0-f22a-48f6-bb78-8f5a16bbda3f"
	Jan 22 21:30:01 no-preload-086882 kubelet[3447]: E0122 21:30:01.125949    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vrrbf" podUID="ad6b1544-2e14-4185-99b2-06a343ca594f"
	Jan 22 21:30:12 no-preload-086882 kubelet[3447]: I0122 21:30:12.125017    3447 scope.go:117] "RemoveContainer" containerID="fc1cd5f0f8fa11d9b28e848be11dfb1b2bbbba024a1b4c2c8de65a8d28a79416"
	Jan 22 21:30:12 no-preload-086882 kubelet[3447]: E0122 21:30:12.125239    3447 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kcdms_kubernetes-dashboard(0fc12af0-f22a-48f6-bb78-8f5a16bbda3f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kcdms" podUID="0fc12af0-f22a-48f6-bb78-8f5a16bbda3f"
	
	
	==> kubernetes-dashboard [f16de8978218691b98e9527b2e77661daada32f7cde251abcc70f906d0fc8008] <==
	2025/01/22 21:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:18:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:19:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:19:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:20:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:20:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:21:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:21:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:22:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:22:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:23:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:23:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:24:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:24:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:25:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:25:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:26:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:26:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:27:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:27:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:28:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:28:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:29:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:29:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/22 21:30:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [c52b0f6c939845c4fe8dfc2ef5209579c72cc4d6b6fb633f83eef9eff6364732] <==
	I0122 21:08:34.465075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0122 21:08:34.492590       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0122 21:08:34.492710       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0122 21:08:34.512628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0122 21:08:34.512761       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-086882_0bf8f465-5b13-4cd1-91a3-681082491df3!
	I0122 21:08:34.513584       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2968d3a4-e1ca-4ea9-9165-52e1dc7d92dc", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-086882_0bf8f465-5b13-4cd1-91a3-681082491df3 became leader
	I0122 21:08:34.617267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-086882_0bf8f465-5b13-4cd1-91a3-681082491df3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-086882 -n no-preload-086882
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-086882 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-vrrbf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-086882 describe pod metrics-server-f79f97bbb-vrrbf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-086882 describe pod metrics-server-f79f97bbb-vrrbf: exit status 1 (64.683266ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-vrrbf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-086882 describe pod metrics-server-f79f97bbb-vrrbf: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1598.13s)
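
For local triage of a failure like this one, the post-mortem queries that helpers_test.go runs above can be replayed by hand. A minimal sketch in Go (not the actual helpers_test.go implementation; the kubectl context name is taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Replay the non-running-pods query the harness ran above
		// (helpers_test.go:261): list pods whose phase is not Running.
		out, err := exec.Command("kubectl", "--context", "no-preload-086882",
			"get", "po", "-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl exited with error: %v\n", err)
		}
		fmt.Printf("non-running pods: %s\n", out)
	}

Note that a pod listed this way can disappear before a follow-up describe runs, which is exactly what produced the NotFound error above.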

                                                
                                    

Test pass (280/316)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.23
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 4.65
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 87.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.83
29 TestAddons/serial/Volcano 40.39
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 15.14
36 TestAddons/parallel/Ingress 20.32
37 TestAddons/parallel/InspektorGadget 10.84
38 TestAddons/parallel/MetricsServer 5.7
40 TestAddons/parallel/CSI 59.83
41 TestAddons/parallel/Headlamp 19.77
42 TestAddons/parallel/CloudSpanner 5.54
43 TestAddons/parallel/LocalPath 56.34
44 TestAddons/parallel/NvidiaDevicePlugin 5.63
45 TestAddons/parallel/Yakd 11.84
47 TestAddons/StoppedEnableDisable 91.26
48 TestCertOptions 46.18
49 TestCertExpiration 280.92
51 TestForceSystemdFlag 69.91
52 TestForceSystemdEnv 98.93
54 TestKVMDriverInstallOrUpdate 6.04
58 TestErrorSpam/setup 43
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.47
62 TestErrorSpam/unpause 1.58
63 TestErrorSpam/stop 5.16
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.58
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.76
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
75 TestFunctional/serial/CacheCmd/cache/add_local 1.85
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 43.09
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.27
86 TestFunctional/serial/LogsFileCmd 1.3
87 TestFunctional/serial/InvalidService 4.51
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 15.34
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.96
97 TestFunctional/parallel/ServiceCmdConnect 11.52
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 33.6
101 TestFunctional/parallel/SSHCmd 0.43
102 TestFunctional/parallel/CpCmd 1.39
103 TestFunctional/parallel/MySQL 25.58
104 TestFunctional/parallel/FileSync 0.24
105 TestFunctional/parallel/CertSync 1.54
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
113 TestFunctional/parallel/License 0.25
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.27
120 TestFunctional/parallel/ServiceCmd/List 0.42
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
123 TestFunctional/parallel/ServiceCmd/Format 0.3
124 TestFunctional/parallel/ServiceCmd/URL 0.33
125 TestFunctional/parallel/Version/short 0.06
126 TestFunctional/parallel/Version/components 0.55
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
137 TestFunctional/parallel/ImageCommands/ImageBuild 3.55
138 TestFunctional/parallel/ImageCommands/Setup 1.92
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
143 TestFunctional/parallel/ProfileCmd/profile_list 0.34
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
145 TestFunctional/parallel/MountCmd/any-port 17.66
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.44
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.38
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/parallel/MountCmd/specific-port 1.93
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 182.51
162 TestMultiControlPlane/serial/DeployApp 6.02
163 TestMultiControlPlane/serial/PingHostFromPods 1.14
164 TestMultiControlPlane/serial/AddWorkerNode 56.36
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
167 TestMultiControlPlane/serial/CopyFile 12.93
168 TestMultiControlPlane/serial/StopSecondaryNode 91.59
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
170 TestMultiControlPlane/serial/RestartSecondaryNode 39.83
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 469.53
173 TestMultiControlPlane/serial/DeleteSecondaryNode 6.48
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
175 TestMultiControlPlane/serial/StopCluster 272.94
176 TestMultiControlPlane/serial/RestartCluster 116.5
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
178 TestMultiControlPlane/serial/AddSecondaryNode 73.09
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
183 TestJSONOutput/start/Command 80.9
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.7
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.58
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.51
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 93.7
215 TestMountStart/serial/StartWithMountFirst 27.87
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 28.74
218 TestMountStart/serial/VerifyMountSecond 0.37
219 TestMountStart/serial/DeleteFirst 0.67
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 2.28
222 TestMountStart/serial/RestartStopped 22.8
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 110.92
227 TestMultiNode/serial/DeployApp2Nodes 4.86
228 TestMultiNode/serial/PingHostFrom2Pods 0.77
229 TestMultiNode/serial/AddNode 53.82
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.56
232 TestMultiNode/serial/CopyFile 7.16
233 TestMultiNode/serial/StopNode 2.1
234 TestMultiNode/serial/StartAfterStop 34.01
235 TestMultiNode/serial/RestartKeepsNodes 327.92
236 TestMultiNode/serial/DeleteNode 1.97
237 TestMultiNode/serial/StopMultiNode 182.05
238 TestMultiNode/serial/RestartMultiNode 107.08
239 TestMultiNode/serial/ValidateNameConflict 44.7
246 TestScheduledStopUnix 110.83
250 TestRunningBinaryUpgrade 210.34
252 TestKubernetesUpgrade 198.92
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 121.99
257 TestStoppedBinaryUpgrade/Setup 0.63
258 TestStoppedBinaryUpgrade/Upgrade 155.38
259 TestNoKubernetes/serial/StartWithStopK8s 67.72
260 TestNoKubernetes/serial/Start 35.46
269 TestPause/serial/Start 88.48
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
271 TestNoKubernetes/serial/ProfileList 16.04
272 TestNoKubernetes/serial/Stop 1.35
273 TestNoKubernetes/serial/StartNoArgs 30.89
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
276 TestPause/serial/SecondStartNoReconfiguration 79.97
284 TestNetworkPlugins/group/false 3.08
289 TestStartStop/group/old-k8s-version/serial/FirstStart 188.38
290 TestPause/serial/Pause 0.72
291 TestPause/serial/VerifyStatus 0.26
292 TestPause/serial/Unpause 0.7
293 TestPause/serial/PauseAgain 0.78
294 TestPause/serial/DeletePaused 1.03
295 TestPause/serial/VerifyDeletedResources 0.75
297 TestStartStop/group/no-preload/serial/FirstStart 103.16
299 TestStartStop/group/embed-certs/serial/FirstStart 118.5
300 TestStartStop/group/no-preload/serial/DeployApp 8.27
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
302 TestStartStop/group/no-preload/serial/Stop 91.14
303 TestStartStop/group/embed-certs/serial/DeployApp 8.27
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
305 TestStartStop/group/embed-certs/serial/Stop 91.03
308 TestStartStop/group/old-k8s-version/serial/DeployApp 8.39
309 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
310 TestStartStop/group/old-k8s-version/serial/Stop 91.1
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
314 TestStartStop/group/embed-certs/serial/SecondStart 323.4
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/old-k8s-version/serial/SecondStart 158.88
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
320 TestStartStop/group/old-k8s-version/serial/Pause 2.5
322 TestStartStop/group/newest-cni/serial/FirstStart 47.14
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
325 TestStartStop/group/newest-cni/serial/Stop 2.37
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
327 TestStartStop/group/newest-cni/serial/SecondStart 35.67
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
331 TestStartStop/group/newest-cni/serial/Pause 2.51
332 TestNetworkPlugins/group/auto/Start 54.53
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.07
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
336 TestStartStop/group/embed-certs/serial/Pause 3.06
337 TestNetworkPlugins/group/kindnet/Start 65.09
338 TestNetworkPlugins/group/auto/KubeletFlags 0.21
339 TestNetworkPlugins/group/auto/NetCatPod 9.24
340 TestNetworkPlugins/group/auto/DNS 0.16
341 TestNetworkPlugins/group/auto/Localhost 0.13
342 TestNetworkPlugins/group/auto/HairPin 0.14
343 TestNetworkPlugins/group/calico/Start 82.07
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
346 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
347 TestNetworkPlugins/group/kindnet/DNS 0.16
348 TestNetworkPlugins/group/kindnet/Localhost 0.12
349 TestNetworkPlugins/group/kindnet/HairPin 0.13
350 TestNetworkPlugins/group/custom-flannel/Start 68.3
351 TestNetworkPlugins/group/calico/ControllerPod 6.01
352 TestNetworkPlugins/group/calico/KubeletFlags 0.22
353 TestNetworkPlugins/group/calico/NetCatPod 10.24
354 TestNetworkPlugins/group/calico/DNS 0.16
355 TestNetworkPlugins/group/calico/Localhost 0.11
356 TestNetworkPlugins/group/calico/HairPin 0.13
357 TestNetworkPlugins/group/flannel/Start 66.18
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.25
360 TestNetworkPlugins/group/custom-flannel/DNS 0.14
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
363 TestNetworkPlugins/group/bridge/Start 58.48
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
366 TestNetworkPlugins/group/flannel/NetCatPod 10.24
367 TestNetworkPlugins/group/flannel/DNS 0.14
368 TestNetworkPlugins/group/flannel/Localhost 0.11
369 TestNetworkPlugins/group/flannel/HairPin 0.11
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
371 TestNetworkPlugins/group/bridge/NetCatPod 9.24
372 TestNetworkPlugins/group/enable-default-cni/Start 83.64
373 TestNetworkPlugins/group/bridge/DNS 0.14
374 TestNetworkPlugins/group/bridge/Localhost 0.11
375 TestNetworkPlugins/group/bridge/HairPin 0.11
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.21
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
x
+
TestDownloadOnly/v1.20.0/json-events (8.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-657092 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-657092 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.227831137s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.23s)
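
The json-events variant asserts that minikube start -o=json emits one well-formed JSON event per line. A minimal, schema-agnostic sketch of consuming such a stream (generic decoding only; no concrete event fields are assumed):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Pipe `minikube start -o=json ...` output into stdin; each line
		// should decode as a standalone JSON object.
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Fprintf(os.Stderr, "malformed event line: %v\n", err)
				continue
			}
			fmt.Printf("event: %v\n", ev)
		}
	}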

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0122 19:58:00.619868  158271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0122 19:58:00.619964  158271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
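
The preload-exists assertion reduces to a file-existence check on the cached tarball found above. A minimal sketch (the path is copied from the preload.go log line; adjust it for a different MINIKUBE_HOME):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the "Found local preload" log line above.
		p := "/home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4"
		info, err := os.Stat(p)
		if err != nil {
			fmt.Printf("preload tarball missing: %v\n", err)
			return
		}
		fmt.Printf("preload tarball present (%d bytes)\n", info.Size())
	}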

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-657092
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-657092: exit status 85 (60.91743ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-657092 | jenkins | v1.35.0 | 22 Jan 25 19:57 UTC |          |
	|         | -p download-only-657092        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 19:57:52
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 19:57:52.434092  158283 out.go:345] Setting OutFile to fd 1 ...
	I0122 19:57:52.434209  158283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 19:57:52.434218  158283 out.go:358] Setting ErrFile to fd 2...
	I0122 19:57:52.434222  158283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 19:57:52.434423  158283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	W0122 19:57:52.434562  158283 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20288-150966/.minikube/config/config.json: open /home/jenkins/minikube-integration/20288-150966/.minikube/config/config.json: no such file or directory
	I0122 19:57:52.435095  158283 out.go:352] Setting JSON to true
	I0122 19:57:52.436022  158283 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6007,"bootTime":1737569865,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 19:57:52.436122  158283 start.go:139] virtualization: kvm guest
	I0122 19:57:52.438616  158283 out.go:97] [download-only-657092] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0122 19:57:52.438724  158283 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball: no such file or directory
	I0122 19:57:52.438757  158283 notify.go:220] Checking for updates...
	I0122 19:57:52.440155  158283 out.go:169] MINIKUBE_LOCATION=20288
	I0122 19:57:52.441815  158283 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 19:57:52.443295  158283 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 19:57:52.444676  158283 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 19:57:52.446138  158283 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0122 19:57:52.448808  158283 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0122 19:57:52.449015  158283 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 19:57:52.484007  158283 out.go:97] Using the kvm2 driver based on user configuration
	I0122 19:57:52.484030  158283 start.go:297] selected driver: kvm2
	I0122 19:57:52.484036  158283 start.go:901] validating driver "kvm2" against <nil>
	I0122 19:57:52.484388  158283 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 19:57:52.484477  158283 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20288-150966/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0122 19:57:52.499904  158283 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0122 19:57:52.499973  158283 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0122 19:57:52.500510  158283 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0122 19:57:52.500655  158283 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0122 19:57:52.500693  158283 cni.go:84] Creating CNI manager for ""
	I0122 19:57:52.500744  158283 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0122 19:57:52.500753  158283 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0122 19:57:52.500801  158283 start.go:340] cluster config:
	{Name:download-only-657092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-657092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 19:57:52.500959  158283 iso.go:125] acquiring lock: {Name:mkc3bf0604e328871936621dd0e0cda10261a449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0122 19:57:52.502882  158283 out.go:97] Downloading VM boot image ...
	I0122 19:57:52.502914  158283 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0122 19:57:55.395484  158283 out.go:97] Starting "download-only-657092" primary control-plane node in "download-only-657092" cluster
	I0122 19:57:55.395524  158283 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0122 19:57:55.417021  158283 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0122 19:57:55.417058  158283 cache.go:56] Caching tarball of preloaded images
	I0122 19:57:55.417212  158283 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0122 19:57:55.419027  158283 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0122 19:57:55.419051  158283 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0122 19:57:55.442600  158283 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-657092 host does not exist
	  To start a cluster, run: "minikube start -p download-only-657092"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
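
The exit status 85 above is the expected outcome: "minikube logs" fails by design against a profile created with --download-only, whose host was therefore never started (see the "host does not exist" message in the captured stdout). A minimal sketch of that assertion pattern in Go (an illustration, not the test's actual code; binary path and profile name copied from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "minikube logs" against a download-only profile is expected to fail,
	// because no VM/host was ever created for it.
	err := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-657092").Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got expected exit status 85 (host does not exist)")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}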

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-657092
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.1/json-events (4.65s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-973116 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-973116 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (4.64694575s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.65s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0122 19:58:05.603306  158271 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0122 19:58:05.603350  158271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20288-150966/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-973116
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-973116: exit status 85 (64.106194ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-657092 | jenkins | v1.35.0 | 22 Jan 25 19:57 UTC |                     |
	|         | -p download-only-657092        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 22 Jan 25 19:58 UTC | 22 Jan 25 19:58 UTC |
	| delete  | -p download-only-657092        | download-only-657092 | jenkins | v1.35.0 | 22 Jan 25 19:58 UTC | 22 Jan 25 19:58 UTC |
	| start   | -o=json --download-only        | download-only-973116 | jenkins | v1.35.0 | 22 Jan 25 19:58 UTC |                     |
	|         | -p download-only-973116        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/22 19:58:00
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0122 19:58:00.996745  158499 out.go:345] Setting OutFile to fd 1 ...
	I0122 19:58:00.997266  158499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 19:58:00.997282  158499 out.go:358] Setting ErrFile to fd 2...
	I0122 19:58:00.997290  158499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 19:58:00.997738  158499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 19:58:00.998702  158499 out.go:352] Setting JSON to true
	I0122 19:58:00.999595  158499 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6016,"bootTime":1737569865,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 19:58:00.999689  158499 start.go:139] virtualization: kvm guest
	I0122 19:58:01.001850  158499 out.go:97] [download-only-973116] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 19:58:01.002023  158499 notify.go:220] Checking for updates...
	I0122 19:58:01.003607  158499 out.go:169] MINIKUBE_LOCATION=20288
	I0122 19:58:01.005094  158499 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 19:58:01.006469  158499 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 19:58:01.007899  158499 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 19:58:01.009279  158499 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-973116 host does not exist
	  To start a cluster, run: "minikube start -p download-only-973116"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-973116
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0122 19:58:06.196978  158271 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-452390 --alsologtostderr --binary-mirror http://127.0.0.1:41317 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-452390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-452390
--- PASS: TestBinaryMirror (0.61s)

TestOffline (87.58s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-449237 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-449237 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m25.953127347s)
helpers_test.go:175: Cleaning up "offline-containerd-449237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-449237
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-449237: (1.622224256s)
--- PASS: TestOffline (87.58s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-964261
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-964261: exit status 85 (52.281727ms)

-- stdout --
	* Profile "addons-964261" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-964261"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-964261
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-964261: exit status 85 (53.599874ms)

-- stdout --
	* Profile "addons-964261" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-964261"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (208.83s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-964261 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-964261 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m28.831331169s)
--- PASS: TestAddons/Setup (208.83s)

TestAddons/serial/Volcano (40.39s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 16.88511ms
addons_test.go:807: volcano-scheduler stabilized in 16.998533ms
addons_test.go:815: volcano-admission stabilized in 17.077986ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-4cvsp" [23fbf2b8-1f34-445a-af8d-249033030c04] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004226854s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-wtgnl" [373a1372-2cf5-4323-b3a4-68f107caea30] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003290327s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-9gq7d" [8fd7d2b1-209c-48ef-8d8d-fbcde4c43518] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003612304s
addons_test.go:842: (dbg) Run:  kubectl --context addons-964261 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-964261 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-964261 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [16ec3c23-7214-4344-b3d0-450ea84d5d5a] Pending
helpers_test.go:344: "test-job-nginx-0" [16ec3c23-7214-4344-b3d0-450ea84d5d5a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [16ec3c23-7214-4344-b3d0-450ea84d5d5a] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003621814s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable volcano --alsologtostderr -v=1: (11.004562282s)
--- PASS: TestAddons/serial/Volcano (40.39s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-964261 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-964261 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-964261 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-964261 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c01e5b41-ef75-4f7f-ab76-2547f3a82df5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c01e5b41-ef75-4f7f-ab76-2547f3a82df5] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.005025561s
addons_test.go:633: (dbg) Run:  kubectl --context addons-964261 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-964261 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-964261 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

TestAddons/parallel/Registry (15.14s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.027579ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-7qwt9" [0b1baa65-e301-4c58-b754-8a6485f082e2] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.037753544s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vwq7k" [8286b785-cae3-42c4-9427-f17b049ddb99] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004008211s
addons_test.go:331: (dbg) Run:  kubectl --context addons-964261 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-964261 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-964261 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.346643122s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 ip
2025/01/22 20:02:49 [DEBUG] GET http://192.168.39.105:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.14s)

TestAddons/parallel/Ingress (20.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-964261 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-964261 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-964261 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [417d2e46-d33d-4e8c-8554-55fa91f7ccd6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [417d2e46-d33d-4e8c-8554-55fa91f7ccd6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004278242s
I0122 20:03:05.853823  158271 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-964261 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.105
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable ingress-dns --alsologtostderr -v=1: (1.353532493s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable ingress --alsologtostderr -v=1: (7.756341308s)
--- PASS: TestAddons/parallel/Ingress (20.32s)

TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5x246" [5e2a646b-888c-45d5-a15c-4245d213cdb4] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00553116s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable inspektor-gadget --alsologtostderr -v=1: (5.836920192s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.009441ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-2tctq" [1dfd0eea-96cf-44a4-937c-2d3f5f3bb66b] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003618612s
addons_test.go:402: (dbg) Run:  kubectl --context addons-964261 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/CSI (59.83s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0122 20:02:52.026757  158271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0122 20:02:52.032417  158271 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0122 20:02:52.032445  158271 kapi.go:107] duration metric: took 5.709057ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.718421ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-964261 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-964261 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c82b29f0-36f3-441c-88dc-ca7a59b3aaab] Pending
helpers_test.go:344: "task-pv-pod" [c82b29f0-36f3-441c-88dc-ca7a59b3aaab] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c82b29f0-36f3-441c-88dc-ca7a59b3aaab] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004382702s
addons_test.go:511: (dbg) Run:  kubectl --context addons-964261 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-964261 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-964261 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-964261 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-964261 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-964261 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-964261 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d5cfae79-5879-4083-bff5-d980dc9edc44] Pending
helpers_test.go:344: "task-pv-pod-restore" [d5cfae79-5879-4083-bff5-d980dc9edc44] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d5cfae79-5879-4083-bff5-d980dc9edc44] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004146481s
addons_test.go:553: (dbg) Run:  kubectl --context addons-964261 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-964261 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-964261 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.777181328s)
--- PASS: TestAddons/parallel/CSI (59.83s)
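
The repeated helpers_test.go:394 lines above are a poll loop: the helper keeps re-reading the PVC's .status.phase through kubectl's jsonpath output until the claim binds or the 6m0s budget expires. A minimal sketch of that pattern (assumed helper name and assumed "Bound" target phase; not the actual helpers_test.go code, with context and PVC names taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls a PVC's phase via kubectl until it reports "Bound"
// or the timeout elapses, mirroring the repeated jsonpath queries above.
func waitPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-964261", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}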

TestAddons/parallel/Headlamp (19.77s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-964261 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-964261 --alsologtostderr -v=1: (1.019723853s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-zkqmf" [ff4a6f85-ada3-4b15-aee1-698f13c0c6a4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-zkqmf" [ff4a6f85-ada3-4b15-aee1-698f13c0c6a4] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004271577s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable headlamp --alsologtostderr -v=1: (5.747534456s)
--- PASS: TestAddons/parallel/Headlamp (19.77s)

TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-gzgkq" [c5173b53-dd7a-4b03-ba18-4b57ac61656c] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003578968s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/parallel/LocalPath (56.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-964261 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-964261 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f3105852-7dd2-4f14-8134-e16590aa8bda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f3105852-7dd2-4f14-8134-e16590aa8bda] Running
helpers_test.go:344: "test-local-path" [f3105852-7dd2-4f14-8134-e16590aa8bda] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f3105852-7dd2-4f14-8134-e16590aa8bda] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004719233s
addons_test.go:906: (dbg) Run:  kubectl --context addons-964261 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 ssh "cat /opt/local-path-provisioner/pvc-37f2a762-c0e2-4b8d-b324-bb2f587ddb6b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-964261 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-964261 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.49441701s)
--- PASS: TestAddons/parallel/LocalPath (56.34s)

TestAddons/parallel/NvidiaDevicePlugin (5.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qxgmw" [6fff703c-66d5-4366-ba8d-bc1ee3dd3827] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.044539213s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.63s)

TestAddons/parallel/Yakd (11.84s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-fjwpz" [a9edf523-c698-4e35-afdc-833483b30f0a] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005195608s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-964261 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-964261 addons disable yakd --alsologtostderr -v=1: (5.835469443s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

TestAddons/StoppedEnableDisable (91.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-964261
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-964261: (1m30.972373702s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-964261
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-964261
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-964261
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

TestCertOptions (46.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-504363 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-504363 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (44.114518575s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-504363 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-504363 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-504363 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-504363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-504363
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-504363: (1.566179888s)
--- PASS: TestCertOptions (46.18s)

TestCertExpiration (280.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-946533 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-946533 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m13.119899904s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-946533 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-946533 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (27.014625383s)
helpers_test.go:175: Cleaning up "cert-expiration-946533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-946533
--- PASS: TestCertExpiration (280.92s)

TestForceSystemdFlag (69.91s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-277306 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-277306 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m8.885813439s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-277306 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-277306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-277306
--- PASS: TestForceSystemdFlag (69.91s)

TestForceSystemdEnv (98.93s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-539692 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-539692 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m37.921973732s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-539692 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-539692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-539692
--- PASS: TestForceSystemdEnv (98.93s)
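
TestForceSystemdEnv drives the same assertion through the environment instead of a flag; judging by the MINIKUBE_FORCE_SYSTEMD entry printed in every run's env listing, the equivalent invocation is presumably:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-539692 --memory=2048 --driver=kvm2 --container-runtime=containerd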

TestKVMDriverInstallOrUpdate (6.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0122 20:59:09.172809  158271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0122 20:59:09.172973  158271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0122 20:59:09.210587  158271 install.go:62] docker-machine-driver-kvm2: exit status 1
W0122 20:59:09.211059  158271 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0122 20:59:09.211125  158271 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate630733210/001/docker-machine-driver-kvm2
I0122 20:59:09.668695  158271 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate630733210/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000759ab0 gz:0xc000759ab8 tar:0xc000759a60 tar.bz2:0xc000759a70 tar.gz:0xc000759a80 tar.xz:0xc000759a90 tar.zst:0xc000759aa0 tbz2:0xc000759a70 tgz:0xc000759a80 txz:0xc000759a90 tzst:0xc000759aa0 xz:0xc000759ac0 zip:0xc000759ad0 zst:0xc000759ac8] Getters:map[file:0xc00029a4d0 http:0xc000550af0 https:0xc000550b40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0122 20:59:09.668773  158271 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate630733210/001/docker-machine-driver-kvm2
I0122 20:59:12.540958  158271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0122 20:59:12.541060  158271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0122 20:59:12.575220  158271 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0122 20:59:12.575263  158271 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0122 20:59:12.575362  158271 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0122 20:59:12.575401  158271 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate630733210/002/docker-machine-driver-kvm2
I0122 20:59:12.927252  158271 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate630733210/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000759ab0 gz:0xc000759ab8 tar:0xc000759a60 tar.bz2:0xc000759a70 tar.gz:0xc000759a80 tar.xz:0xc000759a90 tar.zst:0xc000759aa0 tbz2:0xc000759a70 tgz:0xc000759a80 txz:0xc000759a90 tzst:0xc000759aa0 xz:0xc000759ac0 zip:0xc000759ad0 zst:0xc000759ac8] Getters:map[file:0xc0009f0320 http:0xc000877b30 https:0xc000877b80] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0122 20:59:12.927294  158271 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate630733210/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (6.04s)
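
The two warnings above show the intended fallback path: the checksum for the arch-suffixed release asset 404s, so the updater retries the unsuffixed "common" name. A shell sketch of the same two-step download (URLs taken from the log; curl is used here purely for illustration):

    base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    # prefer the arch-specific asset; on failure (the 404 above), fall back to the common name
    curl -fLO "$base/docker-machine-driver-kvm2-amd64" ||
        curl -fLO "$base/docker-machine-driver-kvm2"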

TestErrorSpam/setup (43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-160221 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-160221 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-160221 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-160221 --driver=kvm2  --container-runtime=containerd: (42.996331577s)
--- PASS: TestErrorSpam/setup (43.00s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 status
--- PASS: TestErrorSpam/status (0.71s)

TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

TestErrorSpam/stop (5.16s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 stop: (1.490614947s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 stop: (1.735926517s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-160221 --log_dir /tmp/nospam-160221 stop: (1.933796638s)
--- PASS: TestErrorSpam/stop (5.16s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20288-150966/.minikube/files/etc/test/nested/copy/158271/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.58s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381178 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0122 20:06:35.703986  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:35.710354  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:35.721664  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:35.743018  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:35.784397  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:35.865884  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:36.027387  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:36.349076  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:36.991157  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:38.273051  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:40.835185  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:45.956522  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:06:56.198383  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:07:16.680282  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-381178 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m19.580968856s)
--- PASS: TestFunctional/serial/StartWithProxy (79.58s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.76s)

=== RUN   TestFunctional/serial/SoftStart
I0122 20:07:36.252132  158271 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381178 --alsologtostderr -v=8
E0122 20:07:57.642548  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-381178 --alsologtostderr -v=8: (43.75675748s)
functional_test.go:663: soft start took 43.757385043s for "functional-381178" cluster.
I0122 20:08:20.009244  158271 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (43.76s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-381178 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 cache add registry.k8s.io/pause:3.1: (1.022683317s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 cache add registry.k8s.io/pause:3.3: (1.105094791s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-381178 /tmp/TestFunctionalserialCacheCmdcacheadd_local3824276601/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cache add minikube-local-cache-test:functional-381178
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 cache add minikube-local-cache-test:functional-381178: (1.536934183s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cache delete minikube-local-cache-test:functional-381178
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-381178
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.85s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.483966ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
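
The failed `crictl inspecti` above is the point of the test: the image is deleted from the node, and `cache reload` pushes it back from the host-side cache (kept under $MINIKUBE_HOME/cache/images by default). The cycle, in short:

    minikube -p functional-381178 ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove from the node
    minikube -p functional-381178 cache reload                                           # re-load cached images
    minikube -p functional-381178 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again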

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 kubectl -- --context functional-381178 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-381178 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (43.09s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381178 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-381178 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.086105269s)
functional_test.go:761: restart took 43.086247469s for "functional-381178" cluster.
I0122 20:09:10.362027  158271 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (43.09s)
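
One hedged way to confirm the --extra-config value actually reached the apiserver command line (the static pod name assumes the default single-node layout, kube-apiserver-<profile>):

    kubectl --context functional-381178 -n kube-system get pod kube-apiserver-functional-381178 \
        -o jsonpath='{.spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins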

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-381178 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
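
The health check parses the pod JSON for phase and Ready conditions; a compact jsonpath equivalent that lists each control-plane pod with its phase:

    kubectl --context functional-381178 -n kube-system get po -l tier=control-plane \
        -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'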

TestFunctional/serial/LogsCmd (1.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 logs: (1.273048982s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 logs --file /tmp/TestFunctionalserialLogsFileCmd1226816096/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 logs --file /tmp/TestFunctionalserialLogsFileCmd1226816096/001/logs.txt: (1.300178264s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (4.51s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-381178 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-381178
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-381178: exit status 115 (266.232217ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.134:32319 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-381178 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-381178 delete -f testdata/invalidsvc.yaml: (1.04854093s)
--- PASS: TestFunctional/serial/InvalidService (4.51s)
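
testdata/invalidsvc.yaml itself is not shown in the report; the stand-in below reproduces the same SVC_UNREACHABLE condition with a NodePort service whose selector matches no pods:

    cat <<'EOF' | kubectl --context functional-381178 apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod     # matches nothing, so the service never gets endpoints
      ports:
      - port: 80
    EOF
    minikube -p functional-381178 service invalid-svc   # exits 115 with SVC_UNREACHABLE, as captured above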

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 config get cpus: exit status 14 (72.587687ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 config get cpus: exit status 14 (69.399938ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
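
Exit status 14 above is the expected outcome of `config get` on an unset key; the whole round-trip the test performs is:

    minikube -p functional-381178 config set cpus 2
    minikube -p functional-381178 config get cpus      # prints 2
    minikube -p functional-381178 config unset cpus
    minikube -p functional-381178 config get cpus      # exit 14: key not found in config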

TestFunctional/parallel/DashboardCmd (15.34s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-381178 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-381178 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 166784: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.34s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381178 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-381178 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (138.241288ms)

-- stdout --
	* [functional-381178] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0122 20:09:39.397899  166625 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:09:39.398239  166625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:09:39.398255  166625 out.go:358] Setting ErrFile to fd 2...
	I0122 20:09:39.398264  166625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:09:39.398530  166625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:09:39.399061  166625 out.go:352] Setting JSON to false
	I0122 20:09:39.400030  166625 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6714,"bootTime":1737569865,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:09:39.400132  166625 start.go:139] virtualization: kvm guest
	I0122 20:09:39.402467  166625 out.go:177] * [functional-381178] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 20:09:39.403749  166625 notify.go:220] Checking for updates...
	I0122 20:09:39.403777  166625 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 20:09:39.405036  166625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:09:39.406327  166625 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 20:09:39.407508  166625 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 20:09:39.408788  166625 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 20:09:39.410081  166625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 20:09:39.411777  166625 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:09:39.412199  166625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:09:39.412263  166625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:09:39.427578  166625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0122 20:09:39.428015  166625 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:09:39.428547  166625 main.go:141] libmachine: Using API Version  1
	I0122 20:09:39.428609  166625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:09:39.428922  166625 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:09:39.429116  166625 main.go:141] libmachine: (functional-381178) Calling .DriverName
	I0122 20:09:39.429330  166625 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:09:39.429631  166625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:09:39.429666  166625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:09:39.445070  166625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0122 20:09:39.445472  166625 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:09:39.445890  166625 main.go:141] libmachine: Using API Version  1
	I0122 20:09:39.445914  166625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:09:39.446258  166625 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:09:39.446462  166625 main.go:141] libmachine: (functional-381178) Calling .DriverName
	I0122 20:09:39.481639  166625 out.go:177] * Using the kvm2 driver based on existing profile
	I0122 20:09:39.482918  166625 start.go:297] selected driver: kvm2
	I0122 20:09:39.482933  166625 start.go:901] validating driver "kvm2" against &{Name:functional-381178 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-381178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:09:39.483021  166625 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 20:09:39.485076  166625 out.go:201] 
	W0122 20:09:39.486425  166625 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0122 20:09:39.487751  166625 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381178 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
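
--dry-run still runs flag validation, which is what this test relies on: 250MB is below the 1800MB usable minimum and surfaces as exit status 23:

    out/minikube-linux-amd64 start -p functional-381178 --dry-run --memory 250MB --driver=kvm2 --container-runtime=containerd
    echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY), per the captured run above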

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-381178 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-381178 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (156.519726ms)

-- stdout --
	* [functional-381178] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0122 20:09:39.683678  166682 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:09:39.683801  166682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:09:39.683813  166682 out.go:358] Setting ErrFile to fd 2...
	I0122 20:09:39.683819  166682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:09:39.684120  166682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:09:39.684695  166682 out.go:352] Setting JSON to false
	I0122 20:09:39.685691  166682 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6715,"bootTime":1737569865,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:09:39.685805  166682 start.go:139] virtualization: kvm guest
	I0122 20:09:39.687949  166682 out.go:177] * [functional-381178] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0122 20:09:39.689588  166682 notify.go:220] Checking for updates...
	I0122 20:09:39.689600  166682 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 20:09:39.691032  166682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:09:39.692395  166682 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 20:09:39.693815  166682 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 20:09:39.695098  166682 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 20:09:39.696355  166682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 20:09:39.698191  166682 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:09:39.698749  166682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:09:39.698822  166682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:09:39.714117  166682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0122 20:09:39.714629  166682 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:09:39.715136  166682 main.go:141] libmachine: Using API Version  1
	I0122 20:09:39.715158  166682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:09:39.715494  166682 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:09:39.715650  166682 main.go:141] libmachine: (functional-381178) Calling .DriverName
	I0122 20:09:39.715886  166682 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:09:39.716163  166682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:09:39.716197  166682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:09:39.731729  166682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0122 20:09:39.732321  166682 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:09:39.732832  166682 main.go:141] libmachine: Using API Version  1
	I0122 20:09:39.732861  166682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:09:39.733252  166682 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:09:39.733487  166682 main.go:141] libmachine: (functional-381178) Calling .DriverName
	I0122 20:09:39.773147  166682 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0122 20:09:39.782005  166682 start.go:297] selected driver: kvm2
	I0122 20:09:39.782023  166682 start.go:901] validating driver "kvm2" against &{Name:functional-381178 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-381178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.134 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0122 20:09:39.782165  166682 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 20:09:39.784719  166682 out.go:201] 
	W0122 20:09:39.786149  166682 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0122 20:09:39.787377  166682 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
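
The French output above is the assertion target. The report does not show how the locale is selected; presumably through the standard environment, along the lines of:

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-381178 --dry-run --memory 250MB --driver=kvm2 --container-runtime=containerd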

TestFunctional/parallel/StatusCmd (0.96s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
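
The -f flag takes a Go template over the status struct; the fields exercised above (.Host, .Kubelet, .APIServer, .Kubeconfig) can be combined freely, e.g.:

    minikube -p functional-381178 status -f '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'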

TestFunctional/parallel/ServiceCmdConnect (11.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-381178 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-381178 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-42w6q" [4c0a3afd-753f-4958-9d20-6b2cdeb6b884] Pending
E0122 20:09:19.564489  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-58f9cf68d8-42w6q" [4c0a3afd-753f-4958-9d20-6b2cdeb6b884] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-42w6q" [4c0a3afd-753f-4958-9d20-6b2cdeb6b884] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00496027s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.134:31858
functional_test.go:1675: http://192.168.39.134:31858: success! body:

Hostname: hello-node-connect-58f9cf68d8-42w6q

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.134:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.134:31858
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.52s)
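
`service --url` resolved http://192.168.39.134:31858 above; the same URL can be reassembled by hand from the node IP and the service's NodePort:

    ip=$(minikube -p functional-381178 ip)
    port=$(kubectl --context functional-381178 get svc hello-node-connect -o jsonpath='{.spec.ports[0].nodePort}')
    curl "http://$ip:$port/"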

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (33.6s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [88a76b5d-c23a-4ad8-84ae-4e0f3df26601] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004310583s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-381178 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-381178 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-381178 get pvc myclaim -o=json
I0122 20:09:24.717030  158271 retry.go:31] will retry after 2.035375505s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c8bee5c3-93a7-4803-8fae-c720f88777f2 ResourceVersion:765 Generation:0 CreationTimestamp:2025-01-22 20:09:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019baee0 VolumeMode:0xc0019baef0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-381178 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-381178 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [225eb9f0-54d5-4c49-a51a-fceec1dbf2ec] Pending
helpers_test.go:344: "sp-pod" [225eb9f0-54d5-4c49-a51a-fceec1dbf2ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [225eb9f0-54d5-4c49-a51a-fceec1dbf2ec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003932303s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-381178 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-381178 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-381178 delete -f testdata/storage-provisioner/pod.yaml: (1.628523422s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-381178 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2355ba42-3d14-4725-a5f9-faa6034e3193] Pending
helpers_test.go:344: "sp-pod" [2355ba42-3d14-4725-a5f9-faa6034e3193] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2355ba42-3d14-4725-a5f9-faa6034e3193] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.013336217s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-381178 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.60s)
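
The retry at 20:09:24 polls the claim until .status.phase moves from Pending to Bound once the minikube-hostpath provisioner creates a volume. A minimal sketch of that polling pattern, shelling out to kubectl as the test does (the 2s delay and 30-attempt cap here are illustrative, not the harness's values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for attempt := 0; attempt < 30; attempt++ {
		// Re-read the PVC phase; matches the `kubectl get pvc myclaim` calls above.
		out, err := exec.Command("kubectl", "--context", "functional-381178",
			"get", "pvc", "myclaim", "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second) // the log shows a ~2s retry as well
	}
	fmt.Println("timed out waiting for pvc to bind")
}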

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh -n functional-381178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cp functional-381178:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2919971355/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh -n functional-381178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh -n functional-381178 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)
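
The sequence above copies a local file into the node, pulls it back out, and cats it in place to confirm the contents survive. A minimal sketch of the same round-trip, using the profile and paths from this run:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	// Copy the local file into the node's filesystem.
	if err := exec.Command(mk, "-p", "functional-381178", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over ssh and compare byte-for-byte with the original.
	remote, err := exec.Command(mk, "-p", "functional-381178", "ssh", "-n",
		"functional-381178", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip ok:", bytes.Equal(local, remote))
}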

TestFunctional/parallel/MySQL (25.58s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-381178 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-mxd95" [c865ccd4-f631-4a07-8e90-837a4b36a22c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-mxd95" [c865ccd4-f631-4a07-8e90-837a4b36a22c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003443644s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;": exit status 1 (153.855693ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0122 20:09:50.375320  158271 retry.go:31] will retry after 1.271957542s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;": exit status 1 (288.27049ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0122 20:09:51.936594  158271 retry.go:31] will retry after 1.760293249s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;": exit status 1 (161.312225ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0122 20:09:53.859379  158271 retry.go:31] will retry after 1.519205293s: exit status 1
2025/01/22 20:09:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1807: (dbg) Run:  kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;": exit status 1 (113.578741ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0122 20:09:55.493112  158271 retry.go:31] will retry after 1.907659752s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-381178 exec mysql-58ccfd96bb-mxd95 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.58s)
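
The ERROR 1045 and ERROR 2002 failures above are the expected transient states while mysqld initializes (grant tables not loaded yet, then the socket not yet listening); the harness simply retries with a growing delay until the query succeeds. A rough sketch of that loop (pod name taken from this run; the backoff schedule here is illustrative, not retry.go's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-381178",
			"exec", "mysql-58ccfd96bb-mxd95", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(delay)
		delay += delay / 2 // roughly mirrors the growing delays in the log
	}
	fmt.Println("mysql never became ready")
}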

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/158271/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo cat /etc/test/nested/copy/158271/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.54s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/158271.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo cat /etc/ssl/certs/158271.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/158271.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo cat /usr/share/ca-certificates/158271.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1582712.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo cat /etc/ssl/certs/1582712.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1582712.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo cat /usr/share/ca-certificates/1582712.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.54s)
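
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for CA directory lookups, so the same certificate is expected at both the .pem path and its hash link. A sketch of recomputing such a hash; it assumes the openssl CLI is available and a local copy of the cert (the test reads the node's copy over ssh instead):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `openssl x509 -hash -noout` prints the subject hash that names the .0 link.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "158271.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("subject hash: %s", out) // e.g. 51391683 for this run's test cert
}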

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-381178 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh "sudo systemctl is-active docker": exit status 1 (206.673494ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh "sudo systemctl is-active crio": exit status 1 (211.346638ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
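
The non-zero exits above are the passing outcome: systemctl is-active prints the unit state and exits 0 only for an active unit, so with containerd as the runtime, docker and crio report inactive on stdout with exit status 3, which ssh surfaces as status 3 and the wrapper as exit status 1. A small sketch of distinguishing the two cases (run directly rather than over ssh):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// systemd convention: exit 3 with "inactive" on stdout for a stopped unit.
		fmt.Printf("not active (exit %d): %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // systemctl missing or not runnable at all
	}
	fmt.Printf("active: %s", out)
}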

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-381178 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-381178 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-587qf" [6a04bf98-5c4d-465b-8374-d6cb588793cd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-587qf" [6a04bf98-5c4d-465b-8374-d6cb588793cd] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00438719s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-381178 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-381178 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-381178 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 165096: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-381178 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-381178 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-381178 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5fabd4c5-de31-44a7-ba7f-d7201d330fcf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5fabd4c5-de31-44a7-ba7f-d7201d330fcf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004689078s
I0122 20:09:30.033036  158271 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

TestFunctional/parallel/ServiceCmd/List (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 service list -o json
functional_test.go:1494: Took "416.761727ms" to run "out/minikube-linux-amd64 -p functional-381178 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.134:30467
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.134:30467
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-381178 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.11.138 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-381178 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381178 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-381178
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-381178
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381178 image ls --format short --alsologtostderr:
I0122 20:09:53.050447  167418 out.go:345] Setting OutFile to fd 1 ...
I0122 20:09:53.050543  167418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.050548  167418 out.go:358] Setting ErrFile to fd 2...
I0122 20:09:53.050552  167418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.050738  167418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
I0122 20:09:53.051337  167418 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.051431  167418 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.051902  167418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.051973  167418 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.067261  167418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
I0122 20:09:53.067854  167418 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.068663  167418 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.068687  167418 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.069209  167418 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.069419  167418 main.go:141] libmachine: (functional-381178) Calling .GetState
I0122 20:09:53.071596  167418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.071647  167418 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.086263  167418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
I0122 20:09:53.086651  167418 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.087120  167418 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.087134  167418 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.087440  167418 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.087658  167418 main.go:141] libmachine: (functional-381178) Calling .DriverName
I0122 20:09:53.087859  167418 ssh_runner.go:195] Run: systemctl --version
I0122 20:09:53.087889  167418 main.go:141] libmachine: (functional-381178) Calling .GetSSHHostname
I0122 20:09:53.091041  167418 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.091455  167418 main.go:141] libmachine: (functional-381178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:64:24", ip: ""} in network mk-functional-381178: {Iface:virbr1 ExpiryTime:2025-01-22 21:06:30 +0000 UTC Type:0 Mac:52:54:00:1e:64:24 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:functional-381178 Clientid:01:52:54:00:1e:64:24}
I0122 20:09:53.091486  167418 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined IP address 192.168.39.134 and MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.091726  167418 main.go:141] libmachine: (functional-381178) Calling .GetSSHPort
I0122 20:09:53.091891  167418 main.go:141] libmachine: (functional-381178) Calling .GetSSHKeyPath
I0122 20:09:53.092039  167418 main.go:141] libmachine: (functional-381178) Calling .GetSSHUsername
I0122 20:09:53.092159  167418 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/functional-381178/id_rsa Username:docker}
I0122 20:09:53.168077  167418 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:09:53.213366  167418 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.213383  167418 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.213638  167418 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
I0122 20:09:53.213680  167418 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.213693  167418 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:09:53.213706  167418 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.213718  167418 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.214009  167418 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.214032  167418 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381178 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| docker.io/library/minikube-local-cache-test | functional-381178  | sha256:e3ece3 | 992B   |
| docker.io/library/nginx                     | alpine             | sha256:93f9c7 | 20.5MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kicbase/echo-server               | functional-381178  | sha256:9056ab | 2.37MB |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381178 image ls --format table --alsologtostderr:
I0122 20:09:53.487376  167529 out.go:345] Setting OutFile to fd 1 ...
I0122 20:09:53.487474  167529 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.487481  167529 out.go:358] Setting ErrFile to fd 2...
I0122 20:09:53.487486  167529 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.488119  167529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
I0122 20:09:53.489505  167529 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.489670  167529 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.490244  167529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.490294  167529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.505090  167529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
I0122 20:09:53.505623  167529 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.506214  167529 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.506245  167529 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.506598  167529 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.506826  167529 main.go:141] libmachine: (functional-381178) Calling .GetState
I0122 20:09:53.508861  167529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.508893  167529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.527597  167529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
I0122 20:09:53.528062  167529 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.528579  167529 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.528600  167529 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.528946  167529 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.529204  167529 main.go:141] libmachine: (functional-381178) Calling .DriverName
I0122 20:09:53.529437  167529 ssh_runner.go:195] Run: systemctl --version
I0122 20:09:53.529468  167529 main.go:141] libmachine: (functional-381178) Calling .GetSSHHostname
I0122 20:09:53.532688  167529 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.533164  167529 main.go:141] libmachine: (functional-381178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:64:24", ip: ""} in network mk-functional-381178: {Iface:virbr1 ExpiryTime:2025-01-22 21:06:30 +0000 UTC Type:0 Mac:52:54:00:1e:64:24 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:functional-381178 Clientid:01:52:54:00:1e:64:24}
I0122 20:09:53.533186  167529 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined IP address 192.168.39.134 and MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.533408  167529 main.go:141] libmachine: (functional-381178) Calling .GetSSHPort
I0122 20:09:53.533651  167529 main.go:141] libmachine: (functional-381178) Calling .GetSSHKeyPath
I0122 20:09:53.533803  167529 main.go:141] libmachine: (functional-381178) Calling .GetSSHUsername
I0122 20:09:53.534006  167529 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/functional-381178/id_rsa Username:docker}
I0122 20:09:53.614724  167529 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:09:53.665384  167529 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.665407  167529 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.665715  167529 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.665741  167529 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:09:53.665749  167529 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.665748  167529 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
I0122 20:09:53.665756  167529 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.665998  167529 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.666017  167529 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:09:53.666025  167529 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381178 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:e3ece31b4205c9d0e407450523b7564a5cf180e857ac376856d8fc2361cb4604","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-381178"],"size":"992"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f9
5724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"s
ha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-381178"],"size":"2372971"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3","repoDigests":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333
144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"20534112"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags
":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381178 image ls --format json --alsologtostderr:
I0122 20:09:53.268940  167467 out.go:345] Setting OutFile to fd 1 ...
I0122 20:09:53.269193  167467 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.269203  167467 out.go:358] Setting ErrFile to fd 2...
I0122 20:09:53.269207  167467 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.269395  167467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
I0122 20:09:53.269983  167467 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.270100  167467 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.270458  167467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.270525  167467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.285518  167467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35485
I0122 20:09:53.285930  167467 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.286438  167467 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.286466  167467 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.286797  167467 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.287018  167467 main.go:141] libmachine: (functional-381178) Calling .GetState
I0122 20:09:53.288776  167467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.288815  167467 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.304443  167467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39707
I0122 20:09:53.305019  167467 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.305583  167467 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.305615  167467 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.305995  167467 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.306203  167467 main.go:141] libmachine: (functional-381178) Calling .DriverName
I0122 20:09:53.306407  167467 ssh_runner.go:195] Run: systemctl --version
I0122 20:09:53.306432  167467 main.go:141] libmachine: (functional-381178) Calling .GetSSHHostname
I0122 20:09:53.309387  167467 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.309854  167467 main.go:141] libmachine: (functional-381178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:64:24", ip: ""} in network mk-functional-381178: {Iface:virbr1 ExpiryTime:2025-01-22 21:06:30 +0000 UTC Type:0 Mac:52:54:00:1e:64:24 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:functional-381178 Clientid:01:52:54:00:1e:64:24}
I0122 20:09:53.309882  167467 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined IP address 192.168.39.134 and MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.310012  167467 main.go:141] libmachine: (functional-381178) Calling .GetSSHPort
I0122 20:09:53.310260  167467 main.go:141] libmachine: (functional-381178) Calling .GetSSHKeyPath
I0122 20:09:53.310413  167467 main.go:141] libmachine: (functional-381178) Calling .GetSSHUsername
I0122 20:09:53.310528  167467 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/functional-381178/id_rsa Username:docker}
I0122 20:09:53.388218  167467 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:09:53.431744  167467 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.431757  167467 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.432026  167467 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.432048  167467 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:09:53.432056  167467 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.432058  167467 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
I0122 20:09:53.432064  167467 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.432370  167467 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
I0122 20:09:53.432400  167467 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.432411  167467 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
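
The stdout above is a single JSON array with id, repoDigests, repoTags, and size per image. A sketch of decoding it, e.g. piped from out/minikube-linux-amd64 -p functional-381178 image ls --format json; the struct here is an illustration matching the fields in the dump, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-64s %s bytes\n", firstTag(img), img.Size)
	}
}

func firstTag(img image) string {
	if len(img.RepoTags) > 0 {
		return img.RepoTags[0]
	}
	return img.ID // untagged images (e.g. the dashboard entry) fall back to the id
}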

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-381178 image ls --format yaml --alsologtostderr:
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "20534112"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:e3ece31b4205c9d0e407450523b7564a5cf180e857ac376856d8fc2361cb4604
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-381178
size: "992"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-381178
size: "2372971"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381178 image ls --format yaml --alsologtostderr:
I0122 20:09:53.051413  167419 out.go:345] Setting OutFile to fd 1 ...
I0122 20:09:53.051539  167419 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.051551  167419 out.go:358] Setting ErrFile to fd 2...
I0122 20:09:53.051557  167419 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.051728  167419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
I0122 20:09:53.052344  167419 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.052469  167419 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.052885  167419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.052937  167419 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.067436  167419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
I0122 20:09:53.067903  167419 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.068473  167419 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.068492  167419 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.069098  167419 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.069256  167419 main.go:141] libmachine: (functional-381178) Calling .GetState
I0122 20:09:53.071351  167419 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.071397  167419 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.086062  167419 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
I0122 20:09:53.086497  167419 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.087109  167419 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.087132  167419 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.087506  167419 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.087722  167419 main.go:141] libmachine: (functional-381178) Calling .DriverName
I0122 20:09:53.087928  167419 ssh_runner.go:195] Run: systemctl --version
I0122 20:09:53.087955  167419 main.go:141] libmachine: (functional-381178) Calling .GetSSHHostname
I0122 20:09:53.090844  167419 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.091255  167419 main.go:141] libmachine: (functional-381178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:64:24", ip: ""} in network mk-functional-381178: {Iface:virbr1 ExpiryTime:2025-01-22 21:06:30 +0000 UTC Type:0 Mac:52:54:00:1e:64:24 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:functional-381178 Clientid:01:52:54:00:1e:64:24}
I0122 20:09:53.091318  167419 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined IP address 192.168.39.134 and MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.091386  167419 main.go:141] libmachine: (functional-381178) Calling .GetSSHPort
I0122 20:09:53.091638  167419 main.go:141] libmachine: (functional-381178) Calling .GetSSHKeyPath
I0122 20:09:53.091791  167419 main.go:141] libmachine: (functional-381178) Calling .GetSSHUsername
I0122 20:09:53.091968  167419 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/functional-381178/id_rsa Username:docker}
I0122 20:09:53.168103  167419 ssh_runner.go:195] Run: sudo crictl images --output json
I0122 20:09:53.213169  167419 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.213200  167419 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.213502  167419 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.213527  167419 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:09:53.213527  167419 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
I0122 20:09:53.213534  167419 main.go:141] libmachine: Making call to close driver server
I0122 20:09:53.213542  167419 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:53.213734  167419 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:53.213748  167419 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
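Note: per the stderr above, `image ls --format yaml` is a thin wrapper that SSHes into the VM, runs `sudo crictl images --output json`, and re-renders the result as the YAML shown. A minimal way to reproduce both views by hand (profile name taken from this run; no flags beyond those in the log):

    out/minikube-linux-amd64 -p functional-381178 image ls --format yaml
    out/minikube-linux-amd64 -p functional-381178 ssh -- sudo crictl images --output json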

TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh pgrep buildkitd: exit status 1 (198.257425ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image build -t localhost/my-image:functional-381178 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 image build -t localhost/my-image:functional-381178 testdata/build --alsologtostderr: (3.140384612s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-381178 image build -t localhost/my-image:functional-381178 testdata/build --alsologtostderr:
I0122 20:09:53.467465  167519 out.go:345] Setting OutFile to fd 1 ...
I0122 20:09:53.467615  167519 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.467626  167519 out.go:358] Setting ErrFile to fd 2...
I0122 20:09:53.467633  167519 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0122 20:09:53.467814  167519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
I0122 20:09:53.468423  167519 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.469023  167519 config.go:182] Loaded profile config "functional-381178": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0122 20:09:53.469440  167519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.469485  167519 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.486160  167519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
I0122 20:09:53.486681  167519 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.487292  167519 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.487308  167519 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.487647  167519 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.487841  167519 main.go:141] libmachine: (functional-381178) Calling .GetState
I0122 20:09:53.490615  167519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0122 20:09:53.490657  167519 main.go:141] libmachine: Launching plugin server for driver kvm2
I0122 20:09:53.505382  167519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34105
I0122 20:09:53.506088  167519 main.go:141] libmachine: () Calling .GetVersion
I0122 20:09:53.506703  167519 main.go:141] libmachine: Using API Version  1
I0122 20:09:53.506737  167519 main.go:141] libmachine: () Calling .SetConfigRaw
I0122 20:09:53.507108  167519 main.go:141] libmachine: () Calling .GetMachineName
I0122 20:09:53.507276  167519 main.go:141] libmachine: (functional-381178) Calling .DriverName
I0122 20:09:53.507467  167519 ssh_runner.go:195] Run: systemctl --version
I0122 20:09:53.507491  167519 main.go:141] libmachine: (functional-381178) Calling .GetSSHHostname
I0122 20:09:53.510236  167519 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.510677  167519 main.go:141] libmachine: (functional-381178) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:64:24", ip: ""} in network mk-functional-381178: {Iface:virbr1 ExpiryTime:2025-01-22 21:06:30 +0000 UTC Type:0 Mac:52:54:00:1e:64:24 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:functional-381178 Clientid:01:52:54:00:1e:64:24}
I0122 20:09:53.510710  167519 main.go:141] libmachine: (functional-381178) DBG | domain functional-381178 has defined IP address 192.168.39.134 and MAC address 52:54:00:1e:64:24 in network mk-functional-381178
I0122 20:09:53.510867  167519 main.go:141] libmachine: (functional-381178) Calling .GetSSHPort
I0122 20:09:53.511031  167519 main.go:141] libmachine: (functional-381178) Calling .GetSSHKeyPath
I0122 20:09:53.511199  167519 main.go:141] libmachine: (functional-381178) Calling .GetSSHUsername
I0122 20:09:53.511358  167519 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/functional-381178/id_rsa Username:docker}
I0122 20:09:53.602798  167519 build_images.go:161] Building image from path: /tmp/build.940640118.tar
I0122 20:09:53.602866  167519 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0122 20:09:53.613987  167519 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.940640118.tar
I0122 20:09:53.618579  167519 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.940640118.tar: stat -c "%s %y" /var/lib/minikube/build/build.940640118.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.940640118.tar': No such file or directory
I0122 20:09:53.618616  167519 ssh_runner.go:362] scp /tmp/build.940640118.tar --> /var/lib/minikube/build/build.940640118.tar (3072 bytes)
I0122 20:09:53.646000  167519 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.940640118
I0122 20:09:53.657169  167519 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.940640118 -xf /var/lib/minikube/build/build.940640118.tar
I0122 20:09:53.672268  167519 containerd.go:394] Building image: /var/lib/minikube/build/build.940640118
I0122 20:09:53.672345  167519 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.940640118 --local dockerfile=/var/lib/minikube/build/build.940640118 --output type=image,name=localhost/my-image:functional-381178
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:4afe2d9fb6400b57f9fb99f94f274d93cfd4ac2f7eecaf80da9ff0870bf6bb43
#8 exporting manifest sha256:4afe2d9fb6400b57f9fb99f94f274d93cfd4ac2f7eecaf80da9ff0870bf6bb43 0.0s done
#8 exporting config sha256:65061de5c0b96c3c00a248a7e81edab7db0f0ed014f717610a5c0c11df81f47b 0.0s done
#8 naming to localhost/my-image:functional-381178 done
#8 DONE 0.2s
I0122 20:09:56.527941  167519 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.940640118 --local dockerfile=/var/lib/minikube/build/build.940640118 --output type=image,name=localhost/my-image:functional-381178: (2.855561556s)
I0122 20:09:56.528028  167519 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.940640118
I0122 20:09:56.540811  167519 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.940640118.tar
I0122 20:09:56.551030  167519 build_images.go:217] Built localhost/my-image:functional-381178 from /tmp/build.940640118.tar
I0122 20:09:56.551066  167519 build_images.go:133] succeeded building to: functional-381178
I0122 20:09:56.551071  167519 build_images.go:134] failed building to: 
I0122 20:09:56.551094  167519 main.go:141] libmachine: Making call to close driver server
I0122 20:09:56.551106  167519 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:56.551406  167519 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
I0122 20:09:56.551462  167519 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:56.551479  167519 main.go:141] libmachine: Making call to close connection to plugin binary
I0122 20:09:56.551494  167519 main.go:141] libmachine: Making call to close driver server
I0122 20:09:56.551502  167519 main.go:141] libmachine: (functional-381178) Calling .Close
I0122 20:09:56.551736  167519 main.go:141] libmachine: (functional-381178) DBG | Closing plugin on server side
I0122 20:09:56.551768  167519 main.go:141] libmachine: Successfully made call to close driver server
I0122 20:09:56.551794  167519 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)
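Note: build steps #1-#8 above pin down the approximate shape of the testdata/build context: a ~97-byte Dockerfile plus a 62-byte content.txt. A hedged reconstruction (the exact file contents are not in this log, so this is an approximation inferred from steps #5-#7):

    # testdata/build/Dockerfile -- approximate reconstruction
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

As the stderr shows, `image build` tars this context, copies it under /var/lib/minikube/build inside the VM, and delegates to `buildctl build --frontend dockerfile.v0` against the containerd-backed buildkitd.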

TestFunctional/parallel/ImageCommands/Setup (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.891674918s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-381178
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
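Note: all three UpdateContextCmd subtests run the same command; `update-context` rewrites the kubeconfig entry for the profile to point at the cluster's current IP, and the variants differ only in the kubeconfig state they start from. A sketch of the invocation plus one way to verify afterwards (the kubectl check is not part of the test):

    out/minikube-linux-amd64 -p functional-381178 update-context --alsologtostderr -v=2
    kubectl config current-context   # should report functional-381178 if that context is active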

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "290.446701ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.418729ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "283.055408ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "71.212648ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
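Note: the timings above illustrate the point of --light: a full `profile list -o json` validates each cluster's live status (~283ms here), while --light skips the status checks and reads only the local profile config (~71ms). Side by side:

    out/minikube-linux-amd64 profile list -o json           # includes live cluster status
    out/minikube-linux-amd64 profile list -o json --light   # config only, no driver calls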

TestFunctional/parallel/MountCmd/any-port (17.66s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdany-port250544563/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737576571866743588" to /tmp/TestFunctionalparallelMountCmdany-port250544563/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737576571866743588" to /tmp/TestFunctionalparallelMountCmdany-port250544563/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737576571866743588" to /tmp/TestFunctionalparallelMountCmdany-port250544563/001/test-1737576571866743588
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (234.453343ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0122 20:09:32.101527  158271 retry.go:31] will retry after 617.917197ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 22 20:09 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 22 20:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 22 20:09 test-1737576571866743588
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh cat /mount-9p/test-1737576571866743588
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-381178 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8cd5ce67-8d4a-4c1c-9c46-2381ef7cafc2] Pending
helpers_test.go:344: "busybox-mount" [8cd5ce67-8d4a-4c1c-9c46-2381ef7cafc2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8cd5ce67-8d4a-4c1c-9c46-2381ef7cafc2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8cd5ce67-8d4a-4c1c-9c46-2381ef7cafc2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.003269618s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-381178 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdany-port250544563/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.66s)
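Note: the sequence above is the full 9p round-trip this test asserts. Condensed, with /tmp/some-host-dir standing in for the test-generated temp directory:

    out/minikube-linux-amd64 mount -p functional-381178 /tmp/some-host-dir:/mount-9p &   # host-side 9p server (foreground process)
    out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T /mount-9p | grep 9p"   # guest-side mount check (retried until up)
    out/minikube-linux-amd64 -p functional-381178 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-381178 ssh "sudo umount -f /mount-9p"         # teardown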

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image load --daemon kicbase/echo-server:functional-381178 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 image load --daemon kicbase/echo-server:functional-381178 --alsologtostderr: (1.200034637s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image load --daemon kicbase/echo-server:functional-381178 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-381178
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image load --daemon kicbase/echo-server:functional-381178 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-381178 image load --daemon kicbase/echo-server:functional-381178 --alsologtostderr: (1.462882968s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image save kicbase/echo-server:functional-381178 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image rm kicbase/echo-server:functional-381178 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-381178
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 image save --daemon kicbase/echo-server:functional-381178 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-381178
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
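Note: taken together, the last five image subtests exercise a full round-trip between the host Docker daemon, a tarball, and the cluster's containerd store. Condensed (tar path shortened from the one in the log):

    out/minikube-linux-amd64 -p functional-381178 image load --daemon kicbase/echo-server:functional-381178   # host daemon -> cluster
    out/minikube-linux-amd64 -p functional-381178 image save kicbase/echo-server:functional-381178 echo-server-save.tar
    out/minikube-linux-amd64 -p functional-381178 image rm kicbase/echo-server:functional-381178
    out/minikube-linux-amd64 -p functional-381178 image load echo-server-save.tar
    out/minikube-linux-amd64 -p functional-381178 image save --daemon kicbase/echo-server:functional-381178   # cluster -> host daemon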

TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdspecific-port3368441623/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.701985ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0122 20:09:49.798668  158271 retry.go:31] will retry after 613.842973ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdspecific-port3368441623/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh "sudo umount -f /mount-9p": exit status 1 (212.993062ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-381178 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdspecific-port3368441623/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)
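Note: unlike any-port, this variant pins the host-side 9p server to a fixed port, and the exit-status-32 umount at the end shows teardown tolerating an already-unmounted target. The pinned form (again with /tmp/some-host-dir as a stand-in for the temp directory):

    out/minikube-linux-amd64 mount -p functional-381178 /tmp/some-host-dir:/mount-9p --port 46464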

TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3543107603/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3543107603/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3543107603/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T" /mount1: exit status 1 (339.859424ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0122 20:09:51.795467  158271 retry.go:31] will retry after 522.430237ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-381178 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-381178 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3543107603/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3543107603/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-381178 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3543107603/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)
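Note: VerifyCleanup starts three concurrent mounts (/mount1 through /mount3) and then, instead of unmounting each, kills every mount process for the profile in one step:

    out/minikube-linux-amd64 mount -p functional-381178 --kill=true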

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-381178
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-381178
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-381178
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (182.51s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-362449 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0122 20:11:35.695205  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:12:03.406017  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-362449 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m1.849954415s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (182.51s)
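Note: the --ha flag is what turns this into a multi-control-plane start; the status output under StopSecondaryNode below shows three control-plane nodes (ha-362449 through -m03) plus the worker added later. The essential invocation, as run by the test:

    out/minikube-linux-amd64 start -p ha-362449 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr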

TestMultiControlPlane/serial/DeployApp (6.02s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-362449 -- rollout status deployment/busybox: (3.900599517s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-tslfn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-zgvj6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-tslfn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-zgvj6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-tslfn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-zgvj6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.02s)
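Note: the DNS assertions above reduce to exec'ing nslookup in every busybox replica against three names of increasing qualification (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local). One representative probe, with the pod name taken from this run:

    out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- nslookup kubernetes.default.svc.cluster.local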

TestMultiControlPlane/serial/PingHostFromPods (1.14s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-tslfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-tslfn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-zgvj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-zgvj6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)
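Note: the pipeline here extracts the resolved address of host.minikube.internal (line 5, field 3 of busybox nslookup output) and then pings it from inside the pod; 192.168.39.1 is the KVM host-side gateway in this run:

    out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 kubectl -p ha-362449 -- exec busybox-58667487b6-ct6xk -- sh -c "ping -c 1 192.168.39.1"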

TestMultiControlPlane/serial/AddWorkerNode (56.36s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-362449 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-362449 -v=7 --alsologtostderr: (55.48864148s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.36s)
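Note: `node add` joins an additional node to the running profile (a worker here, hence -m04 showing up as "type: Worker" in the later status output). As run by the test:

    out/minikube-linux-amd64 node add -p ha-362449 -v=7 --alsologtostderr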

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-362449 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (12.93s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp testdata/cp-test.txt ha-362449:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2201953261/001/cp-test_ha-362449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449:/home/docker/cp-test.txt ha-362449-m02:/home/docker/cp-test_ha-362449_ha-362449-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test_ha-362449_ha-362449-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449:/home/docker/cp-test.txt ha-362449-m03:/home/docker/cp-test_ha-362449_ha-362449-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test_ha-362449_ha-362449-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449:/home/docker/cp-test.txt ha-362449-m04:/home/docker/cp-test_ha-362449_ha-362449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test_ha-362449_ha-362449-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp testdata/cp-test.txt ha-362449-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2201953261/001/cp-test_ha-362449-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m02:/home/docker/cp-test.txt ha-362449:/home/docker/cp-test_ha-362449-m02_ha-362449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test_ha-362449-m02_ha-362449.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m02:/home/docker/cp-test.txt ha-362449-m03:/home/docker/cp-test_ha-362449-m02_ha-362449-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test_ha-362449-m02_ha-362449-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m02:/home/docker/cp-test.txt ha-362449-m04:/home/docker/cp-test_ha-362449-m02_ha-362449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test_ha-362449-m02_ha-362449-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp testdata/cp-test.txt ha-362449-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2201953261/001/cp-test_ha-362449-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m03:/home/docker/cp-test.txt ha-362449:/home/docker/cp-test_ha-362449-m03_ha-362449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test_ha-362449-m03_ha-362449.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m03:/home/docker/cp-test.txt ha-362449-m02:/home/docker/cp-test_ha-362449-m03_ha-362449-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test_ha-362449-m03_ha-362449-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m03:/home/docker/cp-test.txt ha-362449-m04:/home/docker/cp-test_ha-362449-m03_ha-362449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test_ha-362449-m03_ha-362449-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp testdata/cp-test.txt ha-362449-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2201953261/001/cp-test_ha-362449-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m04:/home/docker/cp-test.txt ha-362449:/home/docker/cp-test_ha-362449-m04_ha-362449.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449 "sudo cat /home/docker/cp-test_ha-362449-m04_ha-362449.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m04:/home/docker/cp-test.txt ha-362449-m02:/home/docker/cp-test_ha-362449-m04_ha-362449-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m02 "sudo cat /home/docker/cp-test_ha-362449-m04_ha-362449-m02.txt"
E0122 20:14:17.731368  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:17.738136  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 cp ha-362449-m04:/home/docker/cp-test.txt ha-362449-m03:/home/docker/cp-test_ha-362449-m04_ha-362449-m03.txt
E0122 20:14:17.749419  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:17.770907  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:17.812347  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:17.893795  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:18.055354  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 ssh -n ha-362449-m03 "sudo cat /home/docker/cp-test_ha-362449-m04_ha-362449-m03.txt"
E0122 20:14:18.377375  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.93s)
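Note: the CopyFile matrix above exercises `minikube cp` in three directions for every node pair: local to node, node to local, and node to node. Representative forms (paths shortened from the log):

    out/minikube-linux-amd64 -p ha-362449 cp testdata/cp-test.txt ha-362449:/home/docker/cp-test.txt        # local -> node
    out/minikube-linux-amd64 -p ha-362449 cp ha-362449:/home/docker/cp-test.txt /tmp/cp-test_ha-362449.txt  # node -> local
    out/minikube-linux-amd64 -p ha-362449 cp ha-362449:/home/docker/cp-test.txt ha-362449-m02:/home/docker/cp-test.txt  # node -> node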

TestMultiControlPlane/serial/StopSecondaryNode (91.59s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 node stop m02 -v=7 --alsologtostderr
E0122 20:14:19.019565  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:20.301267  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:22.863406  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:27.984775  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:38.226986  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:14:58.708683  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:15:39.670133  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-362449 node stop m02 -v=7 --alsologtostderr: (1m30.975753744s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr: exit status 7 (615.802408ms)
-- stdout --
	ha-362449
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-362449-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362449-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-362449-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0122 20:15:49.492789  172105 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:15:49.492933  172105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:15:49.492945  172105 out.go:358] Setting ErrFile to fd 2...
	I0122 20:15:49.492951  172105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:15:49.493136  172105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:15:49.493328  172105 out.go:352] Setting JSON to false
	I0122 20:15:49.493376  172105 mustload.go:65] Loading cluster: ha-362449
	I0122 20:15:49.493468  172105 notify.go:220] Checking for updates...
	I0122 20:15:49.493912  172105 config.go:182] Loaded profile config "ha-362449": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:15:49.493939  172105 status.go:174] checking status of ha-362449 ...
	I0122 20:15:49.494434  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.494482  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.517028  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37535
	I0122 20:15:49.517493  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.518117  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.518147  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.518475  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.518702  172105 main.go:141] libmachine: (ha-362449) Calling .GetState
	I0122 20:15:49.520383  172105 status.go:371] ha-362449 host status = "Running" (err=<nil>)
	I0122 20:15:49.520402  172105 host.go:66] Checking if "ha-362449" exists ...
	I0122 20:15:49.520754  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.520796  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.536007  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40919
	I0122 20:15:49.536462  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.537074  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.537096  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.537446  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.537670  172105 main.go:141] libmachine: (ha-362449) Calling .GetIP
	I0122 20:15:49.540647  172105 main.go:141] libmachine: (ha-362449) DBG | domain ha-362449 has defined MAC address 52:54:00:cb:be:9c in network mk-ha-362449
	I0122 20:15:49.541086  172105 main.go:141] libmachine: (ha-362449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:be:9c", ip: ""} in network mk-ha-362449: {Iface:virbr1 ExpiryTime:2025-01-22 21:10:13 +0000 UTC Type:0 Mac:52:54:00:cb:be:9c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-362449 Clientid:01:52:54:00:cb:be:9c}
	I0122 20:15:49.541110  172105 main.go:141] libmachine: (ha-362449) DBG | domain ha-362449 has defined IP address 192.168.39.245 and MAC address 52:54:00:cb:be:9c in network mk-ha-362449
	I0122 20:15:49.541270  172105 host.go:66] Checking if "ha-362449" exists ...
	I0122 20:15:49.541563  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.541607  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.558528  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0122 20:15:49.558941  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.559370  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.559393  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.559686  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.559910  172105 main.go:141] libmachine: (ha-362449) Calling .DriverName
	I0122 20:15:49.560134  172105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:15:49.560168  172105 main.go:141] libmachine: (ha-362449) Calling .GetSSHHostname
	I0122 20:15:49.563023  172105 main.go:141] libmachine: (ha-362449) DBG | domain ha-362449 has defined MAC address 52:54:00:cb:be:9c in network mk-ha-362449
	I0122 20:15:49.563488  172105 main.go:141] libmachine: (ha-362449) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:be:9c", ip: ""} in network mk-ha-362449: {Iface:virbr1 ExpiryTime:2025-01-22 21:10:13 +0000 UTC Type:0 Mac:52:54:00:cb:be:9c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-362449 Clientid:01:52:54:00:cb:be:9c}
	I0122 20:15:49.563516  172105 main.go:141] libmachine: (ha-362449) DBG | domain ha-362449 has defined IP address 192.168.39.245 and MAC address 52:54:00:cb:be:9c in network mk-ha-362449
	I0122 20:15:49.563652  172105 main.go:141] libmachine: (ha-362449) Calling .GetSSHPort
	I0122 20:15:49.563832  172105 main.go:141] libmachine: (ha-362449) Calling .GetSSHKeyPath
	I0122 20:15:49.563974  172105 main.go:141] libmachine: (ha-362449) Calling .GetSSHUsername
	I0122 20:15:49.564102  172105 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/ha-362449/id_rsa Username:docker}
	I0122 20:15:49.641183  172105 ssh_runner.go:195] Run: systemctl --version
	I0122 20:15:49.647559  172105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:15:49.662179  172105 kubeconfig.go:125] found "ha-362449" server: "https://192.168.39.254:8443"
	I0122 20:15:49.662213  172105 api_server.go:166] Checking apiserver status ...
	I0122 20:15:49.662300  172105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 20:15:49.675186  172105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1185/cgroup
	W0122 20:15:49.686332  172105 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1185/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0122 20:15:49.686411  172105 ssh_runner.go:195] Run: ls
	I0122 20:15:49.690919  172105 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0122 20:15:49.696203  172105 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0122 20:15:49.696254  172105 status.go:463] ha-362449 apiserver status = Running (err=<nil>)
	I0122 20:15:49.696293  172105 status.go:176] ha-362449 status: &{Name:ha-362449 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:15:49.696345  172105 status.go:174] checking status of ha-362449-m02 ...
	I0122 20:15:49.697059  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.697114  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.712720  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I0122 20:15:49.713160  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.713644  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.713675  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.713989  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.714184  172105 main.go:141] libmachine: (ha-362449-m02) Calling .GetState
	I0122 20:15:49.715682  172105 status.go:371] ha-362449-m02 host status = "Stopped" (err=<nil>)
	I0122 20:15:49.715695  172105 status.go:384] host is not running, skipping remaining checks
	I0122 20:15:49.715701  172105 status.go:176] ha-362449-m02 status: &{Name:ha-362449-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:15:49.715726  172105 status.go:174] checking status of ha-362449-m03 ...
	I0122 20:15:49.716002  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.716058  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.730962  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41063
	I0122 20:15:49.731403  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.731930  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.731962  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.732329  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.732560  172105 main.go:141] libmachine: (ha-362449-m03) Calling .GetState
	I0122 20:15:49.734134  172105 status.go:371] ha-362449-m03 host status = "Running" (err=<nil>)
	I0122 20:15:49.734152  172105 host.go:66] Checking if "ha-362449-m03" exists ...
	I0122 20:15:49.734450  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.734485  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.749524  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0122 20:15:49.750007  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.750515  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.750538  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.750853  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.751007  172105 main.go:141] libmachine: (ha-362449-m03) Calling .GetIP
	I0122 20:15:49.754013  172105 main.go:141] libmachine: (ha-362449-m03) DBG | domain ha-362449-m03 has defined MAC address 52:54:00:e3:4d:b2 in network mk-ha-362449
	I0122 20:15:49.754506  172105 main.go:141] libmachine: (ha-362449-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:4d:b2", ip: ""} in network mk-ha-362449: {Iface:virbr1 ExpiryTime:2025-01-22 21:12:05 +0000 UTC Type:0 Mac:52:54:00:e3:4d:b2 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-362449-m03 Clientid:01:52:54:00:e3:4d:b2}
	I0122 20:15:49.754527  172105 main.go:141] libmachine: (ha-362449-m03) DBG | domain ha-362449-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:e3:4d:b2 in network mk-ha-362449
	I0122 20:15:49.754712  172105 host.go:66] Checking if "ha-362449-m03" exists ...
	I0122 20:15:49.755050  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.755106  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.770452  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0122 20:15:49.770884  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.771343  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.771360  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.771722  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.771875  172105 main.go:141] libmachine: (ha-362449-m03) Calling .DriverName
	I0122 20:15:49.772081  172105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:15:49.772102  172105 main.go:141] libmachine: (ha-362449-m03) Calling .GetSSHHostname
	I0122 20:15:49.774858  172105 main.go:141] libmachine: (ha-362449-m03) DBG | domain ha-362449-m03 has defined MAC address 52:54:00:e3:4d:b2 in network mk-ha-362449
	I0122 20:15:49.775432  172105 main.go:141] libmachine: (ha-362449-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:4d:b2", ip: ""} in network mk-ha-362449: {Iface:virbr1 ExpiryTime:2025-01-22 21:12:05 +0000 UTC Type:0 Mac:52:54:00:e3:4d:b2 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:ha-362449-m03 Clientid:01:52:54:00:e3:4d:b2}
	I0122 20:15:49.775452  172105 main.go:141] libmachine: (ha-362449-m03) DBG | domain ha-362449-m03 has defined IP address 192.168.39.160 and MAC address 52:54:00:e3:4d:b2 in network mk-ha-362449
	I0122 20:15:49.775637  172105 main.go:141] libmachine: (ha-362449-m03) Calling .GetSSHPort
	I0122 20:15:49.775798  172105 main.go:141] libmachine: (ha-362449-m03) Calling .GetSSHKeyPath
	I0122 20:15:49.775953  172105 main.go:141] libmachine: (ha-362449-m03) Calling .GetSSHUsername
	I0122 20:15:49.776072  172105 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/ha-362449-m03/id_rsa Username:docker}
	I0122 20:15:49.857336  172105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:15:49.871600  172105 kubeconfig.go:125] found "ha-362449" server: "https://192.168.39.254:8443"
	I0122 20:15:49.871630  172105 api_server.go:166] Checking apiserver status ...
	I0122 20:15:49.871667  172105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 20:15:49.885196  172105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup
	W0122 20:15:49.894099  172105 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1141/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0122 20:15:49.894150  172105 ssh_runner.go:195] Run: ls
	I0122 20:15:49.899013  172105 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0122 20:15:49.903529  172105 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0122 20:15:49.903554  172105 status.go:463] ha-362449-m03 apiserver status = Running (err=<nil>)
	I0122 20:15:49.903562  172105 status.go:176] ha-362449-m03 status: &{Name:ha-362449-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:15:49.903575  172105 status.go:174] checking status of ha-362449-m04 ...
	I0122 20:15:49.903858  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.903897  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.919474  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39527
	I0122 20:15:49.920007  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.920551  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.920579  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.920909  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.921112  172105 main.go:141] libmachine: (ha-362449-m04) Calling .GetState
	I0122 20:15:49.922920  172105 status.go:371] ha-362449-m04 host status = "Running" (err=<nil>)
	I0122 20:15:49.922940  172105 host.go:66] Checking if "ha-362449-m04" exists ...
	I0122 20:15:49.923223  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.923258  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.937905  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0122 20:15:49.938367  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.938885  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.938906  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.939221  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.939424  172105 main.go:141] libmachine: (ha-362449-m04) Calling .GetIP
	I0122 20:15:49.942257  172105 main.go:141] libmachine: (ha-362449-m04) DBG | domain ha-362449-m04 has defined MAC address 52:54:00:df:ca:f0 in network mk-ha-362449
	I0122 20:15:49.942765  172105 main.go:141] libmachine: (ha-362449-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ca:f0", ip: ""} in network mk-ha-362449: {Iface:virbr1 ExpiryTime:2025-01-22 21:13:23 +0000 UTC Type:0 Mac:52:54:00:df:ca:f0 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-362449-m04 Clientid:01:52:54:00:df:ca:f0}
	I0122 20:15:49.942802  172105 main.go:141] libmachine: (ha-362449-m04) DBG | domain ha-362449-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:df:ca:f0 in network mk-ha-362449
	I0122 20:15:49.942939  172105 host.go:66] Checking if "ha-362449-m04" exists ...
	I0122 20:15:49.943263  172105 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:15:49.943307  172105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:15:49.959121  172105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I0122 20:15:49.959629  172105 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:15:49.960138  172105 main.go:141] libmachine: Using API Version  1
	I0122 20:15:49.960165  172105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:15:49.960445  172105 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:15:49.960625  172105 main.go:141] libmachine: (ha-362449-m04) Calling .DriverName
	I0122 20:15:49.960783  172105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:15:49.960801  172105 main.go:141] libmachine: (ha-362449-m04) Calling .GetSSHHostname
	I0122 20:15:49.963520  172105 main.go:141] libmachine: (ha-362449-m04) DBG | domain ha-362449-m04 has defined MAC address 52:54:00:df:ca:f0 in network mk-ha-362449
	I0122 20:15:49.963984  172105 main.go:141] libmachine: (ha-362449-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:ca:f0", ip: ""} in network mk-ha-362449: {Iface:virbr1 ExpiryTime:2025-01-22 21:13:23 +0000 UTC Type:0 Mac:52:54:00:df:ca:f0 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-362449-m04 Clientid:01:52:54:00:df:ca:f0}
	I0122 20:15:49.964022  172105 main.go:141] libmachine: (ha-362449-m04) DBG | domain ha-362449-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:df:ca:f0 in network mk-ha-362449
	I0122 20:15:49.964169  172105 main.go:141] libmachine: (ha-362449-m04) Calling .GetSSHPort
	I0122 20:15:49.964371  172105 main.go:141] libmachine: (ha-362449-m04) Calling .GetSSHKeyPath
	I0122 20:15:49.964501  172105 main.go:141] libmachine: (ha-362449-m04) Calling .GetSSHUsername
	I0122 20:15:49.964648  172105 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/ha-362449-m04/id_rsa Username:docker}
	I0122 20:15:50.044753  172105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:15:50.060102  172105 status.go:176] ha-362449-m04 status: &{Name:ha-362449-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.59s)
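
With m02 down, the "status" invocation above still prints a stanza per node but exits with code 7, so a consumer has to read stdout before acting on the error. A minimal Go sketch of that pattern, assuming the same binary path and profile as this run (checkStatus itself is illustrative, not minikube's own test helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkStatus runs "status" for a profile and scans stdout for
// stopped hosts before surfacing the exit error.
func checkStatus(profile string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").CombinedOutput()
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == "host: Stopped" {
			fmt.Println("a node reports a stopped host")
		}
	}
	// err stays non-nil even though stdout was usable: the exit
	// code (7 in the run above) encodes the degraded state.
	return err
}

func main() {
	if err := checkStatus("ha-362449"); err != nil {
		fmt.Println("degraded:", err)
	}
}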

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

TestMultiControlPlane/serial/RestartSecondaryNode (39.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-362449 node start m02 -v=7 --alsologtostderr: (38.955294699s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (469.53s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-362449 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-362449 -v=7 --alsologtostderr
E0122 20:16:35.695383  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:17:01.591511  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:17.731450  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:19:45.433904  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-362449 -v=7 --alsologtostderr: (4m34.050930573s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-362449 --wait=true -v=7 --alsologtostderr
E0122 20:21:35.695303  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:22:58.767486  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:24:17.730940  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-362449 --wait=true -v=7 --alsologtostderr: (3m15.378982s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-362449
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (469.53s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.48s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-362449 node delete m03 -v=7 --alsologtostderr: (5.745617421s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.48s)
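
The go-template that kubectl evaluates above walks .items, then .status.conditions, and prints the status of every "Ready" condition. kubectl feeds the JSON-decoded object to Go's text/template, which the following self-contained sketch reproduces; the stub node list is invented, only the template string is taken from the run above:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Two stub nodes, each with a single Ready condition.
const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

// The same template string the test passes to kubectl.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Decode into a map so lowercase accesses like .items resolve
	// the same way they do inside kubectl.
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &doc); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
	// Output: " True" on its own line, once per node.
}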

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

TestMultiControlPlane/serial/StopCluster (272.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 stop -v=7 --alsologtostderr
E0122 20:26:35.695696  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-362449 stop -v=7 --alsologtostderr: (4m32.821813499s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr: exit status 7 (113.175537ms)
-- stdout --
	ha-362449
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362449-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362449-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0122 20:29:00.873773  176527 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:29:00.874047  176527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:29:00.874058  176527 out.go:358] Setting ErrFile to fd 2...
	I0122 20:29:00.874062  176527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:29:00.874242  176527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:29:00.874427  176527 out.go:352] Setting JSON to false
	I0122 20:29:00.874461  176527 mustload.go:65] Loading cluster: ha-362449
	I0122 20:29:00.874564  176527 notify.go:220] Checking for updates...
	I0122 20:29:00.874840  176527 config.go:182] Loaded profile config "ha-362449": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:29:00.874864  176527 status.go:174] checking status of ha-362449 ...
	I0122 20:29:00.875249  176527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:29:00.875306  176527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:29:00.901069  176527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40649
	I0122 20:29:00.901558  176527 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:29:00.902167  176527 main.go:141] libmachine: Using API Version  1
	I0122 20:29:00.902193  176527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:29:00.902583  176527 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:29:00.902842  176527 main.go:141] libmachine: (ha-362449) Calling .GetState
	I0122 20:29:00.904540  176527 status.go:371] ha-362449 host status = "Stopped" (err=<nil>)
	I0122 20:29:00.904555  176527 status.go:384] host is not running, skipping remaining checks
	I0122 20:29:00.904563  176527 status.go:176] ha-362449 status: &{Name:ha-362449 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:29:00.904585  176527 status.go:174] checking status of ha-362449-m02 ...
	I0122 20:29:00.904876  176527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:29:00.904927  176527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:29:00.919412  176527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I0122 20:29:00.919891  176527 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:29:00.920311  176527 main.go:141] libmachine: Using API Version  1
	I0122 20:29:00.920337  176527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:29:00.920708  176527 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:29:00.920896  176527 main.go:141] libmachine: (ha-362449-m02) Calling .GetState
	I0122 20:29:00.922545  176527 status.go:371] ha-362449-m02 host status = "Stopped" (err=<nil>)
	I0122 20:29:00.922559  176527 status.go:384] host is not running, skipping remaining checks
	I0122 20:29:00.922566  176527 status.go:176] ha-362449-m02 status: &{Name:ha-362449-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:29:00.922589  176527 status.go:174] checking status of ha-362449-m04 ...
	I0122 20:29:00.922877  176527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:29:00.922947  176527 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:29:00.937476  176527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0122 20:29:00.937952  176527 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:29:00.938463  176527 main.go:141] libmachine: Using API Version  1
	I0122 20:29:00.938487  176527 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:29:00.938789  176527 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:29:00.938971  176527 main.go:141] libmachine: (ha-362449-m04) Calling .GetState
	I0122 20:29:00.940664  176527 status.go:371] ha-362449-m04 host status = "Stopped" (err=<nil>)
	I0122 20:29:00.940681  176527 status.go:384] host is not running, skipping remaining checks
	I0122 20:29:00.940687  176527 status.go:176] ha-362449-m04 status: &{Name:ha-362449-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.94s)

TestMultiControlPlane/serial/RestartCluster (116.5s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-362449 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0122 20:29:17.731452  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:30:40.796260  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-362449 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m55.776392739s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (116.50s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

TestMultiControlPlane/serial/AddSecondaryNode (73.09s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-362449 --control-plane -v=7 --alsologtostderr
E0122 20:31:35.698177  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-362449 --control-plane -v=7 --alsologtostderr: (1m12.267587095s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-362449 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.09s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

TestJSONOutput/start/Command (80.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-084850 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-084850 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m20.896926937s)
--- PASS: TestJSONOutput/start/Command (80.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
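
The two parallel subtests assert complementary properties of the currentstep values carried by the step events the start command emitted: no step number repeats, and the sequence never goes backwards. A sketch of that pair of checks, assuming the steps arrive as decimal strings; the exact assertions live in json_output_test.go and may differ in detail:

package main

import (
	"fmt"
	"strconv"
)

// validateSteps rejects duplicate and decreasing step numbers.
func validateSteps(steps []string) error {
	seen := map[int]bool{}
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil {
			return fmt.Errorf("non-numeric step %q: %v", s, err)
		}
		if seen[n] {
			return fmt.Errorf("duplicate step %d", n)
		}
		if n < prev {
			return fmt.Errorf("step %d after %d", n, prev)
		}
		seen[n] = true
		prev = n
	}
	return nil
}

func main() {
	fmt.Println(validateSteps([]string{"0", "1", "3"})) // <nil>
	fmt.Println(validateSteps([]string{"0", "1", "1"})) // duplicate step 1
}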

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-084850 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-084850 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.51s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-084850 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-084850 --output=json --user=testUser: (6.506037759s)
--- PASS: TestJSONOutput/stop/Command (6.51s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-058582 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-058582 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.065849ms)
-- stdout --
	{"specversion":"1.0","id":"3dca2f0d-0749-440d-970e-a0e20732cd66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-058582] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"75c66328-5884-49f9-908f-0bf1f355e7af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20288"}}
	{"specversion":"1.0","id":"b62aa1c4-7aa2-484a-84fa-f169001bc2d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"afd995ee-8bc1-4e33-9940-ef18d7dd1cf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig"}}
	{"specversion":"1.0","id":"b020be0d-594b-4151-96ae-32e0fed0f62e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube"}}
	{"specversion":"1.0","id":"a596820d-edcf-4c4a-9e36-b232cd68161b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6cd062de-ec1c-4665-8862-56a02a6bf641","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a50d7382-7f5d-4b9a-b36e-363a95ffffc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-058582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-058582
--- PASS: TestErrorJSONOutput (0.20s)
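
Each stdout line above is a CloudEvents-style JSON object, one event per line. A minimal Go sketch that decodes the error event from this run; the struct models only the fields visible above, not minikube's full event schema, and the sample line is abridged:

package main

import (
	"encoding/json"
	"fmt"
)

// Partial view of an event: just the envelope plus the data map.
type event struct {
	Specversion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"a50d7382-7f5d-4b9a-b36e-363a95ffffc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints the event type, exit code, and human-readable message.
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
}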

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (93.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-976653 --driver=kvm2  --container-runtime=containerd
E0122 20:34:17.731222  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-976653 --driver=kvm2  --container-runtime=containerd: (44.695358647s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-993642 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-993642 --driver=kvm2  --container-runtime=containerd: (46.019191029s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-976653
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-993642
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-993642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-993642
helpers_test.go:175: Cleaning up "first-976653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-976653
--- PASS: TestMinikubeProfile (93.70s)
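
The profile assertions above lean on "profile list -ojson". A sketch of consuming that output; the top-level "valid"/"invalid" keys and the Name field reflect my reading of minikube's JSON shape and should be treated as an assumption, since the report never prints the payload:

package main

import (
	"encoding/json"
	"fmt"
)

// profile keeps only the one field this sketch needs.
type profile struct {
	Name string `json:"Name"`
}

type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

func main() {
	// Hand-written stand-in for the command's output.
	raw := `{"invalid":[],"valid":[{"Name":"first-976653"},{"Name":"second-993642"}]}`
	var pl profileList
	if err := json.Unmarshal([]byte(raw), &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("profile:", p.Name)
	}
}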

TestMountStart/serial/StartWithMountFirst (27.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-611806 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-611806 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.870172986s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.87s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-611806 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-611806 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
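
The verification boils down to running mount inside the guest over "minikube ssh" and looking for a 9p entry, exactly as the two commands above do. The same check from Go, as a sketch; binary path and profile mirror this run, and the helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// has9pMount asks the guest for its mount table and scans it for a
// 9p filesystem, the transport minikube's host mount uses here.
func has9pMount(profile string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "mount").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := has9pMount("mount-start-1-611806")
	fmt.Println(ok, err)
}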

TestMountStart/serial/StartWithMountSecond (28.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-627845 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-627845 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.743756166s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.74s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-627845 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-627845 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-611806 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-627845 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-627845 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-627845
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-627845: (2.281284592s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (22.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-627845
E0122 20:36:35.702319  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-627845: (21.803206412s)
--- PASS: TestMountStart/serial/RestartStopped (22.80s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-627845 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-627845 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (110.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327354 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327354 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m50.515867556s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.92s)

TestMultiNode/serial/DeployApp2Nodes (4.86s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-327354 -- rollout status deployment/busybox: (3.415228124s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-79946 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-phwj4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-79946 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-phwj4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-79946 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-phwj4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.86s)
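
The checks above amount to a cross-node DNS smoke test: two busybox replicas (one per node) each resolve an external name and the in-cluster service names. A minimal sketch of the same loop without hard-coded pod names; the app=busybox label selector is an assumption, since the test's manifest is not shown here:

	# wait for the two-replica busybox deployment to be ready
	kubectl rollout status deployment/busybox
	# resolve an external and an in-cluster name from every pod
	for pod in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl exec "$pod" -- nslookup kubernetes.io
	  kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done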

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-79946 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-79946 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-phwj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-327354 -- exec busybox-58667487b6-phwj4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
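
The shell pipeline in those exec calls is doing IP extraction: nslookup prints the resolved answer for host.minikube.internal, awk 'NR==5' keeps the output line that carries the address in busybox's layout, and cut -d' ' -f3 pulls the address field; the result (192.168.39.1, the KVM host) is then pinged. A standalone sketch, with the caveat that the line and field positions are specific to busybox's nslookup output format:

	# resolve the host's address as seen from inside a pod, then ping it once
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"   # 192.168.39.1 on this KVM network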

TestMultiNode/serial/AddNode (53.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-327354 -v 3 --alsologtostderr
E0122 20:39:17.731323  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-327354 -v 3 --alsologtostderr: (53.272040137s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.82s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-327354 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.56s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

TestMultiNode/serial/CopyFile (7.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp testdata/cp-test.txt multinode-327354:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile696234261/001/cp-test_multinode-327354.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354:/home/docker/cp-test.txt multinode-327354-m02:/home/docker/cp-test_multinode-327354_multinode-327354-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m02 "sudo cat /home/docker/cp-test_multinode-327354_multinode-327354-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354:/home/docker/cp-test.txt multinode-327354-m03:/home/docker/cp-test_multinode-327354_multinode-327354-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m03 "sudo cat /home/docker/cp-test_multinode-327354_multinode-327354-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp testdata/cp-test.txt multinode-327354-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile696234261/001/cp-test_multinode-327354-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354-m02:/home/docker/cp-test.txt multinode-327354:/home/docker/cp-test_multinode-327354-m02_multinode-327354.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354 "sudo cat /home/docker/cp-test_multinode-327354-m02_multinode-327354.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354-m02:/home/docker/cp-test.txt multinode-327354-m03:/home/docker/cp-test_multinode-327354-m02_multinode-327354-m03.txt
E0122 20:39:38.769496  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m03 "sudo cat /home/docker/cp-test_multinode-327354-m02_multinode-327354-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp testdata/cp-test.txt multinode-327354-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile696234261/001/cp-test_multinode-327354-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354-m03:/home/docker/cp-test.txt multinode-327354:/home/docker/cp-test_multinode-327354-m03_multinode-327354.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354 "sudo cat /home/docker/cp-test_multinode-327354-m03_multinode-327354.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 cp multinode-327354-m03:/home/docker/cp-test.txt multinode-327354-m02:/home/docker/cp-test_multinode-327354-m03_multinode-327354-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 ssh -n multinode-327354-m02 "sudo cat /home/docker/cp-test_multinode-327354-m03_multinode-327354-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.16s)
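
The sequence above is long because it exercises every direction of minikube cp on a three-node cluster, with a cat over ssh after each copy to verify content. Condensed, the three forms are (profile and node names follow the test; file paths are illustrative):

	# host -> node
	minikube -p multinode-327354 cp testdata/cp-test.txt multinode-327354:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-327354 cp multinode-327354:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node: both sides use the node:path form
	minikube -p multinode-327354 cp multinode-327354-m02:/home/docker/cp-test.txt multinode-327354-m03:/home/docker/cp-test.txt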

TestMultiNode/serial/StopNode (2.10s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-327354 node stop m03: (1.279771556s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327354 status: exit status 7 (405.899393ms)

-- stdout --
	multinode-327354
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-327354-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-327354-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr: exit status 7 (411.814969ms)

-- stdout --
	multinode-327354
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-327354-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-327354-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0122 20:39:43.302660  184264 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:39:43.302770  184264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:39:43.302779  184264 out.go:358] Setting ErrFile to fd 2...
	I0122 20:39:43.302784  184264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:39:43.302974  184264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:39:43.303150  184264 out.go:352] Setting JSON to false
	I0122 20:39:43.303181  184264 mustload.go:65] Loading cluster: multinode-327354
	I0122 20:39:43.303275  184264 notify.go:220] Checking for updates...
	I0122 20:39:43.303678  184264 config.go:182] Loaded profile config "multinode-327354": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:39:43.303705  184264 status.go:174] checking status of multinode-327354 ...
	I0122 20:39:43.304284  184264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:39:43.304340  184264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:39:43.320449  184264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40385
	I0122 20:39:43.320842  184264 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:39:43.321432  184264 main.go:141] libmachine: Using API Version  1
	I0122 20:39:43.321457  184264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:39:43.321756  184264 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:39:43.321938  184264 main.go:141] libmachine: (multinode-327354) Calling .GetState
	I0122 20:39:43.323611  184264 status.go:371] multinode-327354 host status = "Running" (err=<nil>)
	I0122 20:39:43.323630  184264 host.go:66] Checking if "multinode-327354" exists ...
	I0122 20:39:43.323903  184264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:39:43.323937  184264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:39:43.339594  184264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0122 20:39:43.340008  184264 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:39:43.340627  184264 main.go:141] libmachine: Using API Version  1
	I0122 20:39:43.340666  184264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:39:43.341029  184264 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:39:43.341246  184264 main.go:141] libmachine: (multinode-327354) Calling .GetIP
	I0122 20:39:43.344525  184264 main.go:141] libmachine: (multinode-327354) DBG | domain multinode-327354 has defined MAC address 52:54:00:e4:ad:ae in network mk-multinode-327354
	I0122 20:39:43.344970  184264 main.go:141] libmachine: (multinode-327354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:ad:ae", ip: ""} in network mk-multinode-327354: {Iface:virbr1 ExpiryTime:2025-01-22 21:36:58 +0000 UTC Type:0 Mac:52:54:00:e4:ad:ae Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-327354 Clientid:01:52:54:00:e4:ad:ae}
	I0122 20:39:43.345005  184264 main.go:141] libmachine: (multinode-327354) DBG | domain multinode-327354 has defined IP address 192.168.39.117 and MAC address 52:54:00:e4:ad:ae in network mk-multinode-327354
	I0122 20:39:43.345128  184264 host.go:66] Checking if "multinode-327354" exists ...
	I0122 20:39:43.345552  184264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:39:43.345611  184264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:39:43.361622  184264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0122 20:39:43.362084  184264 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:39:43.362500  184264 main.go:141] libmachine: Using API Version  1
	I0122 20:39:43.362523  184264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:39:43.362809  184264 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:39:43.363027  184264 main.go:141] libmachine: (multinode-327354) Calling .DriverName
	I0122 20:39:43.363214  184264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:39:43.363253  184264 main.go:141] libmachine: (multinode-327354) Calling .GetSSHHostname
	I0122 20:39:43.366072  184264 main.go:141] libmachine: (multinode-327354) DBG | domain multinode-327354 has defined MAC address 52:54:00:e4:ad:ae in network mk-multinode-327354
	I0122 20:39:43.366543  184264 main.go:141] libmachine: (multinode-327354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:ad:ae", ip: ""} in network mk-multinode-327354: {Iface:virbr1 ExpiryTime:2025-01-22 21:36:58 +0000 UTC Type:0 Mac:52:54:00:e4:ad:ae Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:multinode-327354 Clientid:01:52:54:00:e4:ad:ae}
	I0122 20:39:43.366573  184264 main.go:141] libmachine: (multinode-327354) DBG | domain multinode-327354 has defined IP address 192.168.39.117 and MAC address 52:54:00:e4:ad:ae in network mk-multinode-327354
	I0122 20:39:43.366751  184264 main.go:141] libmachine: (multinode-327354) Calling .GetSSHPort
	I0122 20:39:43.366915  184264 main.go:141] libmachine: (multinode-327354) Calling .GetSSHKeyPath
	I0122 20:39:43.367042  184264 main.go:141] libmachine: (multinode-327354) Calling .GetSSHUsername
	I0122 20:39:43.367154  184264 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/multinode-327354/id_rsa Username:docker}
	I0122 20:39:43.452614  184264 ssh_runner.go:195] Run: systemctl --version
	I0122 20:39:43.458001  184264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:39:43.471741  184264 kubeconfig.go:125] found "multinode-327354" server: "https://192.168.39.117:8443"
	I0122 20:39:43.471776  184264 api_server.go:166] Checking apiserver status ...
	I0122 20:39:43.471809  184264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0122 20:39:43.484187  184264 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1096/cgroup
	W0122 20:39:43.493125  184264 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1096/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0122 20:39:43.493186  184264 ssh_runner.go:195] Run: ls
	I0122 20:39:43.496956  184264 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8443/healthz ...
	I0122 20:39:43.501387  184264 api_server.go:279] https://192.168.39.117:8443/healthz returned 200:
	ok
	I0122 20:39:43.501407  184264 status.go:463] multinode-327354 apiserver status = Running (err=<nil>)
	I0122 20:39:43.501419  184264 status.go:176] multinode-327354 status: &{Name:multinode-327354 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:39:43.501444  184264 status.go:174] checking status of multinode-327354-m02 ...
	I0122 20:39:43.501814  184264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:39:43.501854  184264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:39:43.517065  184264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0122 20:39:43.517496  184264 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:39:43.518024  184264 main.go:141] libmachine: Using API Version  1
	I0122 20:39:43.518047  184264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:39:43.518369  184264 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:39:43.518543  184264 main.go:141] libmachine: (multinode-327354-m02) Calling .GetState
	I0122 20:39:43.519956  184264 status.go:371] multinode-327354-m02 host status = "Running" (err=<nil>)
	I0122 20:39:43.519975  184264 host.go:66] Checking if "multinode-327354-m02" exists ...
	I0122 20:39:43.520281  184264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:39:43.520327  184264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:39:43.535709  184264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I0122 20:39:43.536196  184264 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:39:43.536717  184264 main.go:141] libmachine: Using API Version  1
	I0122 20:39:43.536739  184264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:39:43.537041  184264 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:39:43.537217  184264 main.go:141] libmachine: (multinode-327354-m02) Calling .GetIP
	I0122 20:39:43.539840  184264 main.go:141] libmachine: (multinode-327354-m02) DBG | domain multinode-327354-m02 has defined MAC address 52:54:00:89:6f:0c in network mk-multinode-327354
	I0122 20:39:43.540235  184264 main.go:141] libmachine: (multinode-327354-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:6f:0c", ip: ""} in network mk-multinode-327354: {Iface:virbr1 ExpiryTime:2025-01-22 21:37:57 +0000 UTC Type:0 Mac:52:54:00:89:6f:0c Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-327354-m02 Clientid:01:52:54:00:89:6f:0c}
	I0122 20:39:43.540264  184264 main.go:141] libmachine: (multinode-327354-m02) DBG | domain multinode-327354-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:89:6f:0c in network mk-multinode-327354
	I0122 20:39:43.540422  184264 host.go:66] Checking if "multinode-327354-m02" exists ...
	I0122 20:39:43.540822  184264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:39:43.540870  184264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:39:43.555870  184264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0122 20:39:43.556316  184264 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:39:43.556796  184264 main.go:141] libmachine: Using API Version  1
	I0122 20:39:43.556814  184264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:39:43.557102  184264 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:39:43.557272  184264 main.go:141] libmachine: (multinode-327354-m02) Calling .DriverName
	I0122 20:39:43.557497  184264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0122 20:39:43.557528  184264 main.go:141] libmachine: (multinode-327354-m02) Calling .GetSSHHostname
	I0122 20:39:43.560068  184264 main.go:141] libmachine: (multinode-327354-m02) DBG | domain multinode-327354-m02 has defined MAC address 52:54:00:89:6f:0c in network mk-multinode-327354
	I0122 20:39:43.560466  184264 main.go:141] libmachine: (multinode-327354-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:6f:0c", ip: ""} in network mk-multinode-327354: {Iface:virbr1 ExpiryTime:2025-01-22 21:37:57 +0000 UTC Type:0 Mac:52:54:00:89:6f:0c Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-327354-m02 Clientid:01:52:54:00:89:6f:0c}
	I0122 20:39:43.560495  184264 main.go:141] libmachine: (multinode-327354-m02) DBG | domain multinode-327354-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:89:6f:0c in network mk-multinode-327354
	I0122 20:39:43.560599  184264 main.go:141] libmachine: (multinode-327354-m02) Calling .GetSSHPort
	I0122 20:39:43.560759  184264 main.go:141] libmachine: (multinode-327354-m02) Calling .GetSSHKeyPath
	I0122 20:39:43.560901  184264 main.go:141] libmachine: (multinode-327354-m02) Calling .GetSSHUsername
	I0122 20:39:43.561004  184264 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20288-150966/.minikube/machines/multinode-327354-m02/id_rsa Username:docker}
	I0122 20:39:43.636365  184264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0122 20:39:43.649272  184264 status.go:176] multinode-327354-m02 status: &{Name:multinode-327354-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:39:43.649329  184264 status.go:174] checking status of multinode-327354-m03 ...
	I0122 20:39:43.649753  184264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:39:43.649806  184264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:39:43.665206  184264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I0122 20:39:43.665643  184264 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:39:43.666123  184264 main.go:141] libmachine: Using API Version  1
	I0122 20:39:43.666146  184264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:39:43.666510  184264 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:39:43.666703  184264 main.go:141] libmachine: (multinode-327354-m03) Calling .GetState
	I0122 20:39:43.668077  184264 status.go:371] multinode-327354-m03 host status = "Stopped" (err=<nil>)
	I0122 20:39:43.668091  184264 status.go:384] host is not running, skipping remaining checks
	I0122 20:39:43.668096  184264 status.go:176] multinode-327354-m03 status: &{Name:multinode-327354-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)
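
Worth noting: status intentionally returns exit code 7 once any host is stopped, so the non-zero exits above are the expected signal, not failures. A minimal sketch of branching on that in a script (profile and node names follow the test):

	minikube -p multinode-327354 node stop m03
	if ! minikube -p multinode-327354 status; then
	  # exit status 7 means at least one host is Stopped; the command itself succeeded
	  echo "cluster degraded: one or more nodes stopped"
	fi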

TestMultiNode/serial/StartAfterStop (34.01s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-327354 node start m03 -v=7 --alsologtostderr: (33.399555099s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.01s)

TestMultiNode/serial/RestartKeepsNodes (327.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-327354
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-327354
E0122 20:41:35.702224  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-327354: (3m2.911429622s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327354 --wait=true -v=8 --alsologtostderr
E0122 20:44:17.731461  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327354 --wait=true -v=8 --alsologtostderr: (2m24.909187336s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-327354
--- PASS: TestMultiNode/serial/RestartKeepsNodes (327.92s)

TestMultiNode/serial/DeleteNode (1.97s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-327354 node delete m03: (1.437069376s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.97s)
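
The go-template in the final check walks every node's conditions and prints only the Ready status, one line per node. A sketch of turning that into a count assertion after the delete; the expected count of 2 is specific to this test's topology:

	# one Ready status line per node; after deleting m03, expect exactly two True
	ready=$(kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}' | grep -c True)
	test "$ready" -eq 2   # control plane + m02 remain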

TestMultiNode/serial/StopMultiNode (182.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 stop
E0122 20:46:35.702205  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:47:20.799491  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-327354 stop: (3m1.883808892s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327354 status: exit status 7 (83.393746ms)

-- stdout --
	multinode-327354
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-327354-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr: exit status 7 (82.333077ms)

-- stdout --
	multinode-327354
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-327354-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0122 20:48:49.582075  186991 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:48:49.582179  186991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:48:49.582191  186991 out.go:358] Setting ErrFile to fd 2...
	I0122 20:48:49.582198  186991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:48:49.582362  186991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:48:49.582535  186991 out.go:352] Setting JSON to false
	I0122 20:48:49.582571  186991 mustload.go:65] Loading cluster: multinode-327354
	I0122 20:48:49.582686  186991 notify.go:220] Checking for updates...
	I0122 20:48:49.582962  186991 config.go:182] Loaded profile config "multinode-327354": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:48:49.582980  186991 status.go:174] checking status of multinode-327354 ...
	I0122 20:48:49.583403  186991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:48:49.583446  186991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:48:49.598053  186991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0122 20:48:49.598507  186991 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:48:49.599118  186991 main.go:141] libmachine: Using API Version  1
	I0122 20:48:49.599147  186991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:48:49.599453  186991 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:48:49.599621  186991 main.go:141] libmachine: (multinode-327354) Calling .GetState
	I0122 20:48:49.601204  186991 status.go:371] multinode-327354 host status = "Stopped" (err=<nil>)
	I0122 20:48:49.601221  186991 status.go:384] host is not running, skipping remaining checks
	I0122 20:48:49.601227  186991 status.go:176] multinode-327354 status: &{Name:multinode-327354 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0122 20:48:49.601264  186991 status.go:174] checking status of multinode-327354-m02 ...
	I0122 20:48:49.601581  186991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0122 20:48:49.601619  186991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0122 20:48:49.615952  186991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0122 20:48:49.616392  186991 main.go:141] libmachine: () Calling .GetVersion
	I0122 20:48:49.616845  186991 main.go:141] libmachine: Using API Version  1
	I0122 20:48:49.616868  186991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0122 20:48:49.617177  186991 main.go:141] libmachine: () Calling .GetMachineName
	I0122 20:48:49.617371  186991 main.go:141] libmachine: (multinode-327354-m02) Calling .GetState
	I0122 20:48:49.618903  186991 status.go:371] multinode-327354-m02 host status = "Stopped" (err=<nil>)
	I0122 20:48:49.618916  186991 status.go:384] host is not running, skipping remaining checks
	I0122 20:48:49.618921  186991 status.go:176] multinode-327354-m02 status: &{Name:multinode-327354-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.05s)

TestMultiNode/serial/RestartMultiNode (107.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327354 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0122 20:49:17.730871  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327354 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.551251029s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-327354 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (107.08s)

TestMultiNode/serial/ValidateNameConflict (44.70s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-327354
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327354-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-327354-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (66.665788ms)

-- stdout --
	* [multinode-327354-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-327354-m02' is duplicated with machine name 'multinode-327354-m02' in profile 'multinode-327354'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-327354-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-327354-m03 --driver=kvm2  --container-runtime=containerd: (43.577203492s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-327354
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-327354: exit status 80 (213.620907ms)

-- stdout --
	* Adding node m03 to cluster multinode-327354 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-327354-m03 already exists in multinode-327354-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-327354-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.70s)
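
Both failures above come from one rule: a profile name must not collide with any existing profile or with the machine names minikube derives for additional nodes (<profile>-m02, <profile>-m03, ...). A condensed sketch of how the two collisions arise (names follow the test):

	# multinode-327354 already owns a machine named multinode-327354-m02,
	# so a new profile reusing that name is rejected outright (exit 14)
	minikube start -p multinode-327354-m02
	# an m03-suffixed profile can be created standalone...
	minikube start -p multinode-327354-m03
	# ...but 'node add' then fails (exit 80) because the derived node name
	# multinode-327354-m03 is taken by that profile
	minikube node add -p multinode-327354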

TestScheduledStopUnix (110.83s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-017452 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-017452 --memory=2048 --driver=kvm2  --container-runtime=containerd: (39.235660355s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-017452 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-017452 -n scheduled-stop-017452
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-017452 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0122 20:52:48.661476  158271 retry.go:31] will retry after 116.464µs: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.662673  158271 retry.go:31] will retry after 205.349µs: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.663811  158271 retry.go:31] will retry after 120.255µs: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.664951  158271 retry.go:31] will retry after 437.023µs: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.666094  158271 retry.go:31] will retry after 651.79µs: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.667244  158271 retry.go:31] will retry after 845.179µs: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.668400  158271 retry.go:31] will retry after 1.539884ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.670618  158271 retry.go:31] will retry after 1.632375ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.672858  158271 retry.go:31] will retry after 2.87626ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.676061  158271 retry.go:31] will retry after 3.561327ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.680301  158271 retry.go:31] will retry after 3.998953ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.684500  158271 retry.go:31] will retry after 9.071113ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.693637  158271 retry.go:31] will retry after 19.072523ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.712816  158271 retry.go:31] will retry after 11.567333ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
I0122 20:52:48.725032  158271 retry.go:31] will retry after 28.775528ms: open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/scheduled-stop-017452/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-017452 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-017452 -n scheduled-stop-017452
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-017452
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-017452 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-017452
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-017452: exit status 7 (65.907326ms)

-- stdout --
	scheduled-stop-017452
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-017452 -n scheduled-stop-017452
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-017452 -n scheduled-stop-017452: exit status 7 (63.829702ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-017452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-017452
--- PASS: TestScheduledStopUnix (110.83s)
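
The run above walks the whole scheduled-stop lifecycle: arm a long schedule, replace it with a shorter one, cancel, then arm again and let it fire. A minimal sketch of the same flow (profile name and durations are illustrative):

	minikube stop -p demo --schedule 5m        # arm a stop five minutes out
	minikube stop -p demo --schedule 15s       # re-scheduling replaces the pending stop
	minikube stop -p demo --cancel-scheduled   # clears the timer; the host keeps running
	minikube stop -p demo --schedule 15s       # arm again and let it fire
	sleep 20; minikube status -p demo          # exit 7 once host/kubelet report Stopped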

TestRunningBinaryUpgrade (210.34s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2031991815 start -p running-upgrade-612357 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2031991815 start -p running-upgrade-612357 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m39.933362484s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-612357 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-612357 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m48.695364496s)
helpers_test.go:175: Cleaning up "running-upgrade-612357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-612357
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-612357: (1.271842691s)
--- PASS: TestRunningBinaryUpgrade (210.34s)
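
The pattern here is an in-place binary upgrade: a pinned v1.26.0 release binary creates the cluster, then the binary under test restarts the same profile while it is still running, so it must adopt the live VM and migrate its config. Condensed (the /tmp path is the test's temp copy of the old release):

	# create the cluster with the old release
	/tmp/minikube-v1.26.0.2031991815 start -p running-upgrade-612357 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
	# upgrade in place with the current build, without stopping first
	out/minikube-linux-amd64 start -p running-upgrade-612357 --memory=2200 --driver=kvm2 --container-runtime=containerd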

TestKubernetesUpgrade (198.92s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m0.929249609s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-638195
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-638195: (1.535596219s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-638195 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-638195 status --format={{.Host}}: exit status 7 (81.147198ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.997677332s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-638195 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (81.4102ms)

-- stdout --
	* [kubernetes-upgrade-638195] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-638195
	    minikube start -p kubernetes-upgrade-638195 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6381952 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-638195 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0122 20:56:18.770839  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 20:56:35.695919  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-638195 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.114384625s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-638195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-638195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-638195: (1.128901785s)
--- PASS: TestKubernetesUpgrade (198.92s)
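
In short, the supported direction on an existing profile is upward only; exit 106 (K8S_DOWNGRADE_UNSUPPORTED) is the guard, and the suggestion box above gives the delete-and-recreate and second-cluster escape hatches. The validated sequence, condensed from the log:

	minikube start -p kubernetes-upgrade-638195 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-638195
	minikube start -p kubernetes-upgrade-638195 --kubernetes-version=v1.32.1 --driver=kvm2 --container-runtime=containerd   # upgrade: ok
	minikube start -p kubernetes-upgrade-638195 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd   # downgrade: exit 106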

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466199 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-466199 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (89.39636ms)

-- stdout --
	* [NoKubernetes-466199] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
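
The MK_USAGE exit exists because --kubernetes-version can also be pinned in minikube's global config, where it would silently contradict --no-kubernetes. The failing call and the recovery the error message itself suggests:

	# rejected (exit 14): the two options contradict each other
	minikube start -p NoKubernetes-466199 --no-kubernetes --kubernetes-version=1.20
	# if the version is pinned globally rather than on the command line, clear it first
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-466199 --no-kubernetes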

TestNoKubernetes/serial/StartWithK8s (121.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466199 --driver=kvm2  --container-runtime=containerd
E0122 20:54:17.731415  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466199 --driver=kvm2  --container-runtime=containerd: (2m1.728405167s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-466199 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (121.99s)

TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

TestStoppedBinaryUpgrade/Upgrade (155.38s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4179140880 start -p stopped-upgrade-116406 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4179140880 start -p stopped-upgrade-116406 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m28.713764127s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4179140880 -p stopped-upgrade-116406 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4179140880 -p stopped-upgrade-116406 stop: (2.027848047s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-116406 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-116406 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m4.640884807s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (155.38s)

TestNoKubernetes/serial/StartWithStopK8s (67.72s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466199 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466199 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m6.406034646s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-466199 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-466199 status -o json: exit status 2 (301.277666ms)

-- stdout --
	{"Name":"NoKubernetes-466199","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-466199
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-466199: (1.010532799s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.72s)

TestNoKubernetes/serial/Start (35.46s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466199 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466199 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (35.456243719s)
--- PASS: TestNoKubernetes/serial/Start (35.46s)

TestPause/serial/Start (88.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-699986 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-699986 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m28.482114885s)
--- PASS: TestPause/serial/Start (88.48s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-466199 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-466199 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.708041ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
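
Note on the assertion: `systemctl is-active` exits 0 only when the unit is active, so the non-zero exit (surfaced here as ssh status 3, systemd's code for an inactive unit) is precisely what the test wants on a --no-kubernetes cluster. A hand-run equivalent, as a sketch (the --quiet flag is dropped so the state is printed; single quotes keep $? from expanding locally):

    $ out/minikube-linux-amd64 ssh -p NoKubernetes-466199 'sudo systemctl is-active kubelet; echo exit=$?'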

TestNoKubernetes/serial/ProfileList (16.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.054892914s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.04s)

TestNoKubernetes/serial/Stop (1.35s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-466199
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-466199: (1.348762554s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (30.89s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-466199 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-466199 --driver=kvm2  --container-runtime=containerd: (30.894384089s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.89s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-116406
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-466199 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-466199 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.951863ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestPause/serial/SecondStartNoReconfiguration (79.97s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-699986 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-699986 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m19.945354017s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (79.97s)

TestNetworkPlugins/group/false (3.08s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-988575 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-988575 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (101.137196ms)

-- stdout --
	* [false-988575] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20288
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0122 20:59:18.433362  195243 out.go:345] Setting OutFile to fd 1 ...
	I0122 20:59:18.433789  195243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:59:18.433801  195243 out.go:358] Setting ErrFile to fd 2...
	I0122 20:59:18.433806  195243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0122 20:59:18.434018  195243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20288-150966/.minikube/bin
	I0122 20:59:18.434605  195243 out.go:352] Setting JSON to false
	I0122 20:59:18.435538  195243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9693,"bootTime":1737569865,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0122 20:59:18.435645  195243 start.go:139] virtualization: kvm guest
	I0122 20:59:18.437439  195243 out.go:177] * [false-988575] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0122 20:59:18.438859  195243 notify.go:220] Checking for updates...
	I0122 20:59:18.438875  195243 out.go:177]   - MINIKUBE_LOCATION=20288
	I0122 20:59:18.440092  195243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0122 20:59:18.441320  195243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20288-150966/kubeconfig
	I0122 20:59:18.442514  195243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20288-150966/.minikube
	I0122 20:59:18.443548  195243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0122 20:59:18.444760  195243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0122 20:59:18.446276  195243 config.go:182] Loaded profile config "cert-expiration-946533": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:59:18.446382  195243 config.go:182] Loaded profile config "force-systemd-flag-277306": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:59:18.446494  195243 config.go:182] Loaded profile config "pause-699986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0122 20:59:18.446577  195243 driver.go:394] Setting default libvirt URI to qemu:///system
	I0122 20:59:18.482151  195243 out.go:177] * Using the kvm2 driver based on user configuration
	I0122 20:59:18.483257  195243 start.go:297] selected driver: kvm2
	I0122 20:59:18.483272  195243 start.go:901] validating driver "kvm2" against <nil>
	I0122 20:59:18.483286  195243 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0122 20:59:18.485155  195243 out.go:201] 
	W0122 20:59:18.486285  195243 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0122 20:59:18.487461  195243 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-988575 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-988575

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-988575

>>> host: /etc/nsswitch.conf:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /etc/hosts:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /etc/resolv.conf:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-988575

>>> host: crictl pods:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: crictl containers:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> k8s: describe netcat deployment:
error: context "false-988575" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-988575" does not exist

>>> k8s: netcat logs:
error: context "false-988575" does not exist

>>> k8s: describe coredns deployment:
error: context "false-988575" does not exist

>>> k8s: describe coredns pods:
error: context "false-988575" does not exist

>>> k8s: coredns logs:
error: context "false-988575" does not exist

>>> k8s: describe api server pod(s):
error: context "false-988575" does not exist

>>> k8s: api server logs:
error: context "false-988575" does not exist

>>> host: /etc/cni:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: ip a s:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: ip r s:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: iptables-save:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: iptables table nat:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> k8s: describe kube-proxy daemon set:
error: context "false-988575" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-988575" does not exist

>>> k8s: kube-proxy logs:
error: context "false-988575" does not exist

>>> host: kubelet daemon status:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: kubelet daemon config:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> k8s: kubelet logs:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:59:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.58:8443
  name: cert-expiration-946533
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:58:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.12:8443
  name: pause-699986
contexts:
- context:
    cluster: cert-expiration-946533
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:59:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-946533
  name: cert-expiration-946533
- context:
    cluster: pause-699986
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:58:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-699986
  name: pause-699986
current-context: cert-expiration-946533
kind: Config
preferences: {}
users:
- name: cert-expiration-946533
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/cert-expiration-946533/client.crt
    client-key: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/cert-expiration-946533/client.key
- name: pause-699986
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/pause-699986/client.crt
    client-key: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/pause-699986/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-988575

>>> host: docker daemon status:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: docker daemon config:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /etc/docker/daemon.json:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: docker system info:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: cri-docker daemon status:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: cri-docker daemon config:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: cri-dockerd version:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: containerd daemon status:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: containerd daemon config:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /etc/containerd/config.toml:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: containerd config dump:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: crio daemon status:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: crio daemon config:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: /etc/crio:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

>>> host: crio config:
* Profile "false-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988575"

----------------------- debugLogs end: false-988575 [took: 2.796575729s] --------------------------------
helpers_test.go:175: Cleaning up "false-988575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-988575
--- PASS: TestNetworkPlugins/group/false (3.08s)
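
The quick failure above is the point of the test: with --container-runtime=containerd, minikube rejects --cni=false because containerd provides no pod networking of its own, so a CNI must be configured. As a sketch, the same start goes through once a CNI is named (bridge is used here as an illustrative value; check `minikube start --help` for the supported list):

    $ minikube start -p false-988575 --cni=bridge --driver=kvm2 --container-runtime=containerd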

TestStartStop/group/old-k8s-version/serial/FirstStart (188.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-989561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-989561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m8.380433415s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (188.38s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-699986 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-699986 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-699986 --output=json --layout=cluster: exit status 2 (262.805368ms)

-- stdout --
	{"Name":"pause-699986","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-699986","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
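
The StatusCode values in the cluster layout above follow an HTTP-flavored convention (200 OK, 405 Stopped, 418 Paused, as seen in the payload), so the paused state can be checked mechanically. A minimal sketch, assuming jq is available on the host (note that status itself exits 2 here, as the test records):

    $ out/minikube-linux-amd64 status -p pause-699986 --output=json --layout=cluster | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'
    Paused
    Stopped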

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-699986 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-699986 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (1.03s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-699986 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-699986 --alsologtostderr -v=5: (1.030551754s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

TestPause/serial/VerifyDeletedResources (0.75s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.75s)

TestStartStop/group/no-preload/serial/FirstStart (103.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-086882 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-086882 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m43.157157068s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.16s)

TestStartStop/group/embed-certs/serial/FirstStart (118.5s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-000171 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0122 21:01:35.695072  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-000171 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m58.501966529s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (118.50s)

TestStartStop/group/no-preload/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-086882 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [baac83d2-f1ec-460d-be5c-db4ab30516a6] Pending
helpers_test.go:344: "busybox" [baac83d2-f1ec-460d-be5c-db4ab30516a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [baac83d2-f1ec-460d-be5c-db4ab30516a6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004444272s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-086882 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)
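
The wait above polls for pods labeled integration-test=busybox until they are Running and Ready. A roughly equivalent hand-run check, as a sketch (context and label taken from the log; kubectl wait is the standard client-side way to block on readiness):

    $ kubectl --context no-preload-086882 wait pod --selector=integration-test=busybox --for=condition=Ready --timeout=8m
    $ kubectl --context no-preload-086882 exec busybox -- /bin/sh -c "ulimit -n"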

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-086882 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-086882 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (91.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-086882 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-086882 --alsologtostderr -v=3: (1m31.141143937s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.14s)

TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-000171 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [922a9d29-27cc-4a51-a05f-b3b983078c3e] Pending
helpers_test.go:344: "busybox" [922a9d29-27cc-4a51-a05f-b3b983078c3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [922a9d29-27cc-4a51-a05f-b3b983078c3e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003688973s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-000171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-000171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-000171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017523418s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-000171 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/embed-certs/serial/Stop (91.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-000171 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-000171 --alsologtostderr -v=3: (1m31.03338535s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.03s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-989561 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03584211-01ba-4ca7-a835-3f651866189b] Pending
helpers_test.go:344: "busybox" [03584211-01ba-4ca7-a835-3f651866189b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [03584211-01ba-4ca7-a835-3f651866189b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004542634s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-989561 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-989561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-989561 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (91.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-989561 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-989561 --alsologtostderr -v=3: (1m31.099084213s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-086882 -n no-preload-086882
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-086882 -n no-preload-086882: exit status 7 (66.273922ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-086882 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-000171 -n embed-certs-000171
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-000171 -n embed-certs-000171: exit status 7 (64.350748ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-000171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (323.4s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-000171 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0122 21:04:00.801865  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:04:17.731499  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-000171 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (5m23.156682675s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-000171 -n embed-certs-000171
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (323.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-989561 -n old-k8s-version-989561
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-989561 -n old-k8s-version-989561: exit status 7 (72.308971ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-989561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (158.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-989561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0122 21:06:35.695735  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-989561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m38.608744777s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-989561 -n old-k8s-version-989561
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (158.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ldqwk" [f43f6e7f-8816-41f9-bbea-bd1f21c61ff7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005288467s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ldqwk" [f43f6e7f-8816-41f9-bbea-bd1f21c61ff7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005044976s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-989561 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
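UserAppExistsAfterStop and AddonExistsAfterStop both poll the same label selector until the dashboard pod is Running. A one-off manual equivalent with plain kubectl is sketched below; the selector, namespace, and 9m budget are taken from the log, while the `kubectl wait` form is an assumption about intent, not what the harness literally runs:

    # Block until the dashboard pod reports Ready, up to the test's 9m timeout.
    kubectl --context old-k8s-version-989561 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s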

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-989561 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
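VerifyKubernetesImages dumps the node's image list as JSON and reports anything outside the stock Kubernetes set, such as the busybox and kindnetd images above. A rough manual equivalent, assuming jq is available and that the JSON schema exposes a repoTags array (the schema can differ between minikube releases):

    # Print every image tag known to the container runtime on this profile.
    out/minikube-linux-amd64 -p old-k8s-version-989561 image list --format=json \
      | jq -r '.[].repoTags[]?'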

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-989561 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-989561 -n old-k8s-version-989561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-989561 -n old-k8s-version-989561: exit status 2 (258.714607ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-989561 -n old-k8s-version-989561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-989561 -n old-k8s-version-989561: exit status 2 (260.457267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-989561 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-989561 -n old-k8s-version-989561
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-989561 -n old-k8s-version-989561
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.50s)
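Note that the two "Non-zero exit ... exit status 2" probes above are the expected outcome, not failures: while paused, the apiserver reports "Paused" and the kubelet "Stopped", each with a non-zero status, which is what the test asserts before unpausing. The full cycle by hand, commands verbatim from the log:

    out/minikube-linux-amd64 pause -p old-k8s-version-989561 --alsologtostderr -v=1
    # Both probes exit 2 while paused; "|| true" keeps a scripted replay going.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-989561 -n old-k8s-version-989561 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-989561 -n old-k8s-version-989561 || true
    out/minikube-linux-amd64 unpause -p old-k8s-version-989561 --alsologtostderr -v=1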

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-274473 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0122 21:07:53.433603  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:53.440018  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:53.452001  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:53.473882  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:53.515994  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:53.597534  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:53.759384  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:54.081129  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:54.723424  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:56.005703  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:07:58.567332  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:08:03.688982  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:08:13.930993  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-274473 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (47.140295736s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-274473 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-274473 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.223527394s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-274473 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-274473 --alsologtostderr -v=3: (2.37154254s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-274473 -n newest-cni-274473
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-274473 -n newest-cni-274473: exit status 7 (79.1026ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-274473 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
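minikube encodes component state into the status exit code (host, kubelet, and apiserver each contribute a bit), so the exit status 7 above indicates everything is stopped, which is exactly the precondition this step wants. Enabling an addon on a stopped profile only records it in the profile configuration; it takes effect on the next start. A minimal replay sketch:

    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-274473 -n newest-cni-274473 \
      || echo "status exit code: $?"   # 7 here means all components are down
    # With the cluster down, this only updates the stored profile config.
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-274473 --images=MetricsScraper=registry.k8s.io/echoserver:1.4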

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (35.67s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-274473 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0122 21:08:34.412966  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-274473 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (35.346284994s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-274473 -n newest-cni-274473
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-274473 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-274473 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-274473 -n newest-cni-274473
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-274473 -n newest-cni-274473: exit status 2 (244.716979ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-274473 -n newest-cni-274473
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-274473 -n newest-cni-274473: exit status 2 (249.253179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-274473 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-274473 -n newest-cni-274473
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-274473 -n newest-cni-274473
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.53s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (54.532463678s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ck4jm" [426ee18c-c7b2-4648-8c5d-736b9bcc9e2f] Running
E0122 21:09:15.374814  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:09:17.730930  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.012111625s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ck4jm" [426ee18c-c7b2-4648-8c5d-736b9bcc9e2f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004693002s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-000171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-000171 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-000171 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-000171 -n embed-certs-000171
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-000171 -n embed-certs-000171: exit status 2 (318.059953ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-000171 -n embed-certs-000171
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-000171 -n embed-certs-000171: exit status 2 (251.119582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-000171 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-000171 -n embed-certs-000171
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-000171 -n embed-certs-000171
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (65.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m5.087842582s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.09s)
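The TestNetworkPlugins groups in the rest of this report differ only in the CNI selector passed to start; memory, wait budget, driver, and runtime are held constant. Compressed into a sketch (the harness runs these as independent test groups, not a loop; custom-flannel and enable-default-cni use their own flags, shown verbatim in their respective Start steps in this report):

    # Built-in CNI selectors exercised in this run, one profile each.
    for cni in kindnet calico flannel bridge; do
      out/minikube-linux-amd64 start -p "${cni}-988575" --memory=3072 \
        --alsologtostderr --wait=true --wait-timeout=15m \
        --cni="$cni" --driver=kvm2 --container-runtime=containerd
    done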

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-988575 "pgrep -a kubelet"
I0122 21:09:53.187996  158271 config.go:182] Loaded profile config "auto-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
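KubeletFlags is a process-line grep over SSH: it pulls the kubelet command line from the VM so the harness can check the flags match the requested network setup. By hand, with an extra filter for whichever flag is of interest (the grep below is illustrative, not part of the test, and the chosen flag is just an example):

    # Print the kubelet command line inside the VM.
    out/minikube-linux-amd64 ssh -p auto-988575 "pgrep -a kubelet"
    # For example, isolate the container runtime endpoint flag:
    out/minikube-linux-amd64 ssh -p auto-988575 "pgrep -a kubelet" \
      | grep -o -- '--container-runtime-endpoint=[^ ]*'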

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-988575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zwqzn" [2bc4d6da-420a-4959-9313-7cf4448c871d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zwqzn" [2bc4d6da-420a-4959-9313-7cf4448c871d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004092057s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)
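The netcat-deployment.yaml manifest itself is not reproduced in the log. Judging from the pod names (netcat-5d86dc444-…), the app=netcat selector, the single dnsutils container, and the later Localhost/HairPin probes against port 8080 and a "netcat" Service, it is a small Deployment plus Service along these lines. This is a hypothetical reconstruction for orientation only; the image, command, and exact fields are placeholders, not the repository's actual testdata:

    kubectl --context auto-988575 apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: netcat
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: netcat
      template:
        metadata:
          labels:
            app: netcat
        spec:
          containers:
          - name: dnsutils            # container name taken from the readiness output above
            image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7       # placeholder image
            command: ["/bin/sh", "-c", "while true; do nc -l -p 8080; done"] # placeholder listener on 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: netcat
    spec:
      selector:
        app: netcat
      ports:
      - port: 8080
        targetPort: 8080
    EOF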

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-988575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
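DNS, Localhost, and HairPin are three escalating connectivity probes run inside the same pod. Annotated, the commands from the three steps above:

    # DNS: cluster DNS must resolve the apiserver's Service name.
    kubectl --context auto-988575 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod reaches its own listener over loopback.
    kubectl --context auto-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod reaches itself back through its own "netcat" Service,
    # which only works when the CNI / kube-proxy setup supports hairpin traffic.
    kubectl --context auto-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"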

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.07s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m22.06842194s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-p7vjj" [81068282-91f1-4023-832f-7e6ae0eeae59] Running
E0122 21:10:37.296113  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003981288s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
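ControllerPod gates the per-plugin traffic tests on the CNI's own DaemonSet pod being healthy first. An equivalent one-shot wait with plain kubectl, using the selector, namespace, and 10m budget from the log (a sketch of intent, not the harness's literal polling loop):

    kubectl --context kindnet-988575 -n kube-system \
      wait --for=condition=Ready pod -l app=kindnet --timeout=600s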

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-988575 "pgrep -a kubelet"
I0122 21:10:43.174734  158271 config.go:182] Loaded profile config "kindnet-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-988575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jmn87" [e378b888-40ae-4977-b6cd-418c86808138] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jmn87" [e378b888-40ae-4977-b6cd-418c86808138] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004555603s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-988575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0122 21:11:35.695531  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m8.294939657s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.30s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-49hdk" [747a7d36-9bdd-49a2-a96a-0525c72889f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00557949s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-988575 "pgrep -a kubelet"
I0122 21:11:46.557652  158271 config.go:182] Loaded profile config "calico-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-988575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gsqj8" [692a0ebc-4325-44c4-8565-67d2f43fc0d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gsqj8" [692a0ebc-4325-44c4-8565-67d2f43fc0d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005618362s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-988575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (66.18s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m6.177179868s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-988575 "pgrep -a kubelet"
I0122 21:12:18.010914  158271 config.go:182] Loaded profile config "custom-flannel-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-988575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ntrhm" [461b1790-38c9-476c-b5c6-adff71c2bbc8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ntrhm" [461b1790-38c9-476c-b5c6-adff71c2bbc8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004772144s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-988575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (58.48s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0122 21:12:53.432927  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:12:58.772588  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (58.482351009s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cmsw6" [5ed89bf9-d8d6-417e-890c-0a2f47a3718f] Running
E0122 21:13:21.138146  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005178818s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-988575 "pgrep -a kubelet"
I0122 21:13:25.653369  158271 config.go:182] Loaded profile config "flannel-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-988575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9kj6z" [50079096-4019-4363-bf08-5080c7a0c94c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9kj6z" [50079096-4019-4363-bf08-5080c7a0c94c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.007071444s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-988575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-988575 "pgrep -a kubelet"
I0122 21:13:43.813907  158271 config.go:182] Loaded profile config "bridge-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-988575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nmkp2" [48ac6da0-b81f-4481-a503-cd111e446358] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nmkp2" [48ac6da0-b81f-4481-a503-cd111e446358] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003869275s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (83.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-988575 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m23.641484233s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.64s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-988575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-988575 "pgrep -a kubelet"
I0122 21:15:16.270182  158271 config.go:182] Loaded profile config "enable-default-cni-988575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-988575 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gwhbw" [20fc62e8-4ec1-4165-ab05-9e4960706213] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gwhbw" [20fc62e8-4ec1-4165-ab05-9e4960706213] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004887455s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-988575 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-988575 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
E0122 21:15:42.087536  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:15:47.209270  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:15:57.451232  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:15.352445  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/auto-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:17.933148  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:35.695240  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.336269  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.342696  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.354038  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.375456  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.416854  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.498522  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.660124  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:40.981875  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:41.623539  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:42.905391  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:45.466841  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:50.588945  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:16:58.895227  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:00.830517  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.241890  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.248236  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.259579  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.280917  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.322226  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.403720  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.565279  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:18.886996  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:19.529117  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:20.810617  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:21.312472  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:23.373852  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:28.495919  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:37.274731  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/auto-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:38.737337  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:53.433280  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:17:59.219679  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:02.275526  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:19.436205  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:19.442594  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:19.453941  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:19.475310  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:19.516663  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:19.598066  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:19.759578  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:20.081310  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:20.723190  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:20.816689  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:22.005313  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:24.566760  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:29.688867  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:39.930194  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:40.181904  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.030633  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.037056  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.048447  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.069803  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.111147  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.192589  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.354105  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:44.676014  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:45.318060  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:46.599957  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:49.161332  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:18:54.283524  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:00.412384  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:04.525812  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:17.730678  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:24.197789  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:25.007106  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:41.374105  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:19:53.417622  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/auto-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:02.104534  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:05.968510  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.465215  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.471692  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.483092  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.504502  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.545913  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.627513  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:16.789076  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:17.110879  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:17.753097  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:19.034940  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:21.116782  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/auto-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:21.596575  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:26.717938  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:36.954927  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:36.959238  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:40.804013  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:20:57.440653  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:03.297513  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:04.658977  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:27.890553  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:35.695525  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:38.402038  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:21:40.336593  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:08.039222  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:18.241917  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:45.945900  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:22:53.433732  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:00.324060  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:19.436222  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:44.030390  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:23:47.138999  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:11.731981  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:16.500327  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:17.731084  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:24:53.417148  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/auto-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:16.464418  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:36.956003  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/kindnet-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:25:44.165762  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/enable-default-cni-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:26:35.695095  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:26:40.336400  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:27:18.241840  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/custom-flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:27:53.433022  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/old-k8s-version-989561/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:28:19.436094  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/flannel-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:28:44.031314  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/bridge-988575/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:29:17.731295  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:29:38.774870  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/addons-964261/client.crt: no such file or directory" logger="UnhandledError"
E0122 21:29:53.415263  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/auto-988575/client.crt: no such file or directory" logger="UnhandledError"
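The run of cert_rotation errors above traces to a single long-lived test process (pid 158271): client-go's certificate rotation keeps re-reading client certificates for profiles that earlier tests have already deleted, so every reload attempt fails on the now-missing client.crt. A minimal, illustrative Go sketch of that failure mode follows; the profile path is copied from the log, and this is not minikube's or client-go's own rotation code:

    // Sketch: reloading a client certificate whose profile directory was
    // deleted fails with the exact "no such file or directory" error logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Paths copied from the log; the profile was removed by an earlier test.
        cert := "/home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.crt"
        key := "/home/jenkins/minikube-integration/20288-150966/.minikube/profiles/calico-988575/client.key"
        for i := 0; i < 3; i++ {
            if _, err := tls.LoadX509KeyPair(cert, key); err != nil {
                // Prints: open .../client.crt: no such file or directory
                fmt.Printf("cert_rotation reload failed: %v\n", err)
            }
            time.Sleep(time.Second) // client-go retries on a jittered backoff; a fixed delay stands in here
        }
    }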

                                                
                                    

Test skip (33/316)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)
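Skips like these follow the standard Go testing guard: check the platform (or the configured container runtime) at the top of the test and bail out with t.Skipf before doing any work. A sketch of that shape, using a hypothetical test name rather than the actual aaa_download_only_test.go source:

    package pkg

    import (
        "runtime"
        "testing"
    )

    // Hypothetical test illustrating the GOOS guard behind the skip above.
    func TestKubectlDownload(t *testing.T) {
        if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
            t.Skipf("Test for darwin and windows, running on %s", runtime.GOOS)
        }
        // platform-specific assertions would follow here
    }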

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-512398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-512398
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
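The cleanup step above shells out to the freshly built minikube binary so that even a skipped test removes the profile it allocated. A sketch of that helper pattern (an assumed shape, not helpers_test.go itself):

    package pkg

    import (
        "os/exec"
        "testing"
    )

    // cleanupProfile mirrors the "delete -p" step logged above.
    func cleanupProfile(t *testing.T, profile string) {
        t.Helper()
        out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
        if err != nil {
            t.Logf("delete -p %s failed: %v\n%s", profile, err, out)
        }
    }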

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
E0122 20:59:17.731394  158271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/functional-381178/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: kubenet-988575 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-988575" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:58:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.12:8443
  name: pause-699986
contexts:
- context:
    cluster: pause-699986
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:58:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-699986
  name: pause-699986
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-699986
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/pause-699986/client.crt
    client-key: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/pause-699986/client.key
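This dump explains every failure in the debug log: the kubeconfig holds only the pause-699986 entry and current-context is empty, because the kubenet-988575 profile was never started, so kubectl reports "context was not found" and minikube reports "Profile ... not found". A small sketch, assuming k8s.io/client-go is on the module path, that reproduces the kubectl-side check against this kubeconfig:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/20288-150966/kubeconfig")
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["kubenet-988575"]; !ok {
            // Matches kubectl's complaint in the queries above.
            fmt.Println("context was not found for specified context: kubenet-988575")
        }
    }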

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-988575

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

>>> host: containerd config dump:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

>>> host: crio daemon status:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

>>> host: crio daemon config:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

>>> host: /etc/crio:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

>>> host: crio config:
* Profile "kubenet-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988575"

----------------------- debugLogs end: kubenet-988575 [took: 3.018515283s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-988575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-988575
--- SKIP: TestNetworkPlugins/group/kubenet (3.18s)
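Note: every host probe in the debugLogs dump above fails with "Profile not found" because the kubenet test is skipped before any cluster is created, so the kubenet-988575 profile never exists on this host. A minimal sketch of running the same checks by hand (profile name and binary path are taken from this log; the driver and runtime flags mirror the ones used elsewhere in this run and are otherwise an assumption):

	# list registered profiles; a missing entry reproduces the errors above
	out/minikube-linux-amd64 profile list
	# create the profile so the host-side probes have a VM to inspect
	out/minikube-linux-amd64 start -p kubenet-988575 --driver=kvm2 --container-runtime=containerd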

x
+
TestNetworkPlugins/group/cilium (3.65s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-988575 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-988575

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-988575

>>> host: /etc/nsswitch.conf:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /etc/hosts:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /etc/resolv.conf:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-988575

>>> host: crictl pods:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: crictl containers:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> k8s: describe netcat deployment:
error: context "cilium-988575" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-988575" does not exist

>>> k8s: netcat logs:
error: context "cilium-988575" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-988575" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-988575" does not exist

>>> k8s: coredns logs:
error: context "cilium-988575" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-988575" does not exist

>>> k8s: api server logs:
error: context "cilium-988575" does not exist

>>> host: /etc/cni:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: ip a s:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: ip r s:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: iptables-save:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: iptables table nat:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-988575

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-988575

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-988575" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-988575" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-988575

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-988575

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-988575" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-988575" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-988575" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-988575" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-988575" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: kubelet daemon config:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> k8s: kubelet logs:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:59:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.58:8443
  name: cert-expiration-946533
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20288-150966/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:58:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.12:8443
  name: pause-699986
contexts:
- context:
    cluster: cert-expiration-946533
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:59:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-946533
  name: cert-expiration-946533
- context:
    cluster: pause-699986
    extensions:
    - extension:
        last-update: Wed, 22 Jan 2025 20:58:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-699986
  name: pause-699986
current-context: cert-expiration-946533
kind: Config
preferences: {}
users:
- name: cert-expiration-946533
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/cert-expiration-946533/client.crt
    client-key: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/cert-expiration-946533/client.key
- name: pause-699986
  user:
    client-certificate: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/pause-699986/client.crt
    client-key: /home/jenkins/minikube-integration/20288-150966/.minikube/profiles/pause-699986/client.key

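Note: the kubeconfig dumped above contains only the cert-expiration-946533 and pause-699986 contexts, which is why every kubectl probe against cilium-988575 reports a missing context. A minimal sketch for inspecting this by hand with standard kubectl subcommands (context name taken from this log):

	# show the contexts actually present in the kubeconfig
	kubectl config get-contexts
	# pinning a command to the absent context reproduces the errors above
	kubectl --context cilium-988575 get pods -A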
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-988575

>>> host: docker daemon status:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: docker daemon config:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: docker system info:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: cri-docker daemon status:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: cri-docker daemon config:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: cri-dockerd version:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: containerd daemon status:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: containerd daemon config:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: containerd config dump:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: crio daemon status:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: crio daemon config:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: /etc/crio:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

>>> host: crio config:
* Profile "cilium-988575" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988575"

----------------------- debugLogs end: cilium-988575 [took: 3.497452476s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-988575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-988575
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)
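Note: the cleanup step above is what removes the stale profile a skipped test leaves behind; a minimal sketch for doing the same by hand (binary path as used throughout this log):

	# delete the profile and its associated VM and state
	out/minikube-linux-amd64 delete -p cilium-988575
	# confirm it is no longer registered
	out/minikube-linux-amd64 profile list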
